ATP 2026: The Assessment Industry Is Growing Up on AI

Adarsh Sudhindra

Chief Innovations Officer

I recently had the opportunity to participate in the ATP Innovations in Testing 2026 Conference in New Orleans, Louisiana, and as always, it was energising to be in a room filled with some of the sharpest minds in the global assessment ecosystem — industry leaders, academicians, psychometricians, technology experts, and practitioners from across North America and beyond.

Every ATP conference offers a useful pulse check on where the assessment industry is headed. This year, for me, two themes stood out very clearly.

And both, in different ways, signal that the industry is entering a more mature phase in its AI journey.

1. The industry is moving from AI excitement to AI discipline

Over the last two to three years, almost every conversation in assessment innovation was dominated by AI-led possibility.

How can AI help create better items?
How can it accelerate test construction?
How can it improve remote proctoring?
How can it detect cheating better?
How can it support marking?
How can it add new intelligence to every part of the assessment lifecycle?

The energy was understandable. AI opened the floodgates to new features, new efficiencies, and new ways of rethinking long-standing problems in assessment.

But ATP 2026 felt different.

This year, the conversation was less about adding more AI features and more about asking a tougher, more important question:

How do we make AI in assessment stable, secure, ethical, explainable, and dependable?

That, to me, is a sign of an industry that is growing up.

We are now moving beyond the initial wave of experimentation and feature enthusiasm into a phase of operational seriousness. People are no longer impressed by AI merely because it can do something new. They now want to know whether it can do that responsibly, consistently, and at scale.

There was a visible shift toward governance, data security, transparency, predictability, and accountability. The discussion is no longer only about innovation through feature expansion. It is about innovation through trust architecture.

This is important.

Because in assessment, unlike many other domains, trust is not optional. If AI is going to influence how tests are built, delivered, monitored, scored, or interpreted, then the systems behind it must be robust enough to stand up to scrutiny — technical scrutiny, regulatory scrutiny, psychometric scrutiny, and public scrutiny.

The next phase of AI in assessment will not be won by those who ship the most features.

It will be won by those who build the most trustworthy systems.

2. The future of assessment is becoming more human-centred, not less

The second big takeaway for me was equally important.

For all the fear that AI may make assessments colder, more automated, or more mechanical, the more exciting possibility is actually the opposite: AI can help make assessments more human-centred.

This year, there was noticeably more interest in multimodal assessment, in multimodal adaptivity, and in rethinking the overall experience of the test taker.

For years, many of these ideas lived largely in theory. We spoke about adaptive experiences, multiple modalities, better inclusion, more contextual assessment design, and more supportive feedback loops. But the technology was not always ready.

Today, it increasingly is.

With recent advances in AI, we are now much closer to a world where the same question or task can be delivered through different modalities, where assessments can become more accessible to a wider range of learners, and where the system can respond more intelligently to candidate needs.

More importantly, we are beginning to imagine assessments that feel like more than a one-time act of judgement.

They can become more responsive.
More supportive.
More aware of the human being behind the score.

If used carefully and ethically, AI can help identify patterns such as test anxiety, stress, disengagement, or persistent knowledge gaps, and enable interventions that improve the testing experience without compromising validity.

That is a profound shift.

Because the future of assessment should not be about making tests feel more like surveillance.

It should be about making them feel more like insight.

Less like a courtroom. More like a coach.

That does not mean reducing rigour. It means improving the experience without diluting the standards. It means preserving fairness while making assessment more inclusive, constructive, and humane.

And that, in my view, is where true innovation lies.

The bigger signal for our industry

What ATP 2026 made clear is that the assessment industry is no longer in an “AI discovery” phase.

It is entering an AI responsibility phase.

That is healthy. That is necessary. And frankly, that is overdue.

The real frontier now is not just building AI into assessments. It is building assessment ecosystems that know how to use AI with restraint, responsibility, and purpose.

The industry is beginning to recognise two truths:

First, AI without governance is risk.
Second, AI without human-centred design is missed opportunity.

So while the last few years were about what AI can add to assessment, this year felt like a turning point toward a deeper question:

What kind of assessment future do we actually want to build with AI?

If the answer is one that is more secure, more explainable, more inclusive, more multimodal, and more supportive of the learner and test taker, then I believe we are finally asking the right questions.

And when an industry starts asking better questions, better innovation usually follows.
