Professor Jenny George, Dean of Melbourne Business School, on Thursday highlighted the growing importance of responsible data-driven decision-making in the age of artificial intelligence while addressing the NXT Summit 2026 at Bharat Mandapam in New Delhi.
Speaking during a special address on entrepreneurship and technology on Day 1 of the summit, George said artificial intelligence has already normalised data-driven decision-making across sectors, from medical diagnostics to everyday tools such as digital navigation systems. She noted that AI-assisted systems are significantly improving productivity, citing research by Stanford University and MIT that found AI tools increased the efficiency of customer service workers by 14–15%, with the largest gains among less experienced employees.
However, George cautioned that the rapid adoption of AI has also shifted decision-making authority within organisations. According to her, many decisions once taken by frontline professionals are increasingly being made inside automated systems and by the small teams that design and manage the underlying algorithms.
She warned that this shift could weaken accountability and critical judgment. Referring to research by the Wharton School involving over 13,000 participants, George said people often tend to accept AI-generated outputs without questioning them, even when they are clearly incorrect — a phenomenon researchers describe as “cognitive surrender”.
George illustrated the risks with Australia’s controversial “Robodebt” programme, where an automated system incorrectly issued hundreds of thousands of welfare debt notices due to flawed calculations. The issue, she argued, was not a technological failure but a failure of accountability, as no individual ultimately took responsibility for the system’s decisions.
She stressed that public trust in AI systems ultimately depends on institutions and the people responsible for deploying them. Studies conducted by Melbourne Business School, she said, show that visible human oversight is the most important factor in sustaining public confidence in automated decision-making.
Concluding her remarks, George urged organisations to clearly assign responsibility for AI-driven decisions and ensure that human oversight remains intact. “AI can make decisions, but only a person can answer for them,” she said, adding that while the race to develop AI capability continues, institutions must also ensure that human accountability remains central to its use.