Higher education has undergone a seismic shift over the last 50 years, driven by expanding curricula, technological advancements, reduced funding, and a growing, more diverse student population. These changes have reshaped the way institutions operate, creating both opportunities and challenges.
After World War II, the G.I. Bill and federal student aid programs opened wide the doors to higher education, leading to greater enrollment and a broader range of student needs. As faculty time became increasingly stretched across research, service, and fundraising responsibilities, academic advising—once the domain of faculty—grew more complex. To meet rising demands, specialized staff roles evolved in areas such as academic advising, career services, wellness, and engagement. With the advent of online education, public scrutiny of college costs, and increased competition for student enrollments, institutions are increasingly hard pressed to improve student outcomes.
Today, with the integration of artificial intelligence (AI) technology across industries and the launch of generative AI tools, the speed of change and the complexity of information have added significant pressure for timely adaptation. While AI offers the potential to streamline operations and enhance student outcomes, institutions remain cautious. Concerns about ethics, security, and integration continue to shape the conversation: institutions want transparent solutions and robust protections for student privacy, and many fear that AI tools could oversimplify personalized support or weaken the human connection critical to student advising. These concerns reflect broader hesitancy in higher education.
Why Are Institutions Hesitant?
Ethics and Systemic Bias
Ethical concerns are a primary reason higher education leaders hesitate to adopt AI. Many worry that AI systems, if trained on historical data, could reinforce existing biases, particularly when making decisions related to student outcomes. For instance, first-generation or adult learners might not benefit equally from AI recommendations designed around traditional student profiles.
These concerns are amplified by the complexity of current AI systems, which makes it difficult for educators and administrators to fully understand how decisions are made. As one college administrator noted, “If the system has only been trained on traditional student data, will it continue to recommend career paths best suited for traditional students? How will it adapt to adult learners or first-generation students?” The extensive documentation, dense terminology, and long implementation cycles often make AI solutions seem more opaque and less relevant to the use cases they aim to address.
The 2024 EDUCAUSE AI Landscape Study echoed these concerns, with respondents emphasizing “the ethical and transparent use of AI, no matter the specific application,” highlighting that appropriate AI applications in education should focus on transparency, student support, and minimizing risks such as bias and data misuse.
Resistance to Change
Beyond technical and financial barriers, cultural and organizational factors also play a significant role in the hesitancy surrounding AI adoption in higher education. The decentralized nature of many large universities, where decision-making and funding are distributed across multiple departments or colleges, complicates the implementation of AI tools. Securing consistent funding and institutional buy-in across units is often challenging, especially when many departments are already stretched thin with existing operational and staffing costs.
Smaller institutions, particularly community colleges and some private colleges, often have more centralized advising structures and are better positioned to adopt new technologies. However, even these institutions face resistance to change, especially if there is a perception that AI tools could undermine the personal, human-centered advising model that has traditionally been a hallmark of higher education.
Security and Data Privacy
Institutions are understandably wary of AI tools that access and process sensitive student data. With cyberattacks on the rise, higher education is a prime target for breaches. Colleges hold vast amounts of personal and academic records, and any compromise could have serious implications.
The Chronicle of Higher Education has repeatedly addressed these risks, including in a recent virtual forum titled AI’s Impact on College Cybersecurity. Institutions must ensure that any AI system they adopt meets rigorous security standards to protect student information. Without robust data safeguards, the risk of breaches will continue to hinder adoption.
Integration Challenges
Universities often rely on complex, decades-old administrative systems that are difficult to adapt to new technology. Attempts to integrate AI tools can disrupt critical operations such as course scheduling, financial aid processing, and advising systems.
According to Inside Higher Ed’s third annual survey of campus chief technology officers, broad AI integration requires alignment with institutional goals set by presidents and provosts, a clear implementation roadmap, and assurance that AI technologies enhance rather than disrupt existing systems.
Financial and Maintenance Barriers
The financial investment required for AI adoption is another factor driving hesitation. Deploying AI systems often involves high upfront costs, as well as expenses related to training, maintenance, and integration.
One administrator shared that their institution spends more than $2 million on the subscription for a student success software platform that took 12 months to implement. Despite this investment, adoption among advisors was low, and the impact on student outcomes was limited.
For institutions with smaller budgets, committing to a new technology without clear institutional benefits can be very challenging. In many cases, expensive tools end up restricting access to select regions and programs.
The Path Forward for Education and AI
Despite these challenges, AI holds significant promise for improving student outcomes and streamlining institutional operations. Thoughtful adoption will require institutions to address key barriers while aligning AI tools with their core mission of equitable education.
Starting with small-scale pilot programs offers a practical way to explore AI’s potential. These programs allow institutions to evaluate tools in a controlled environment, gather feedback and build trust among stakeholders. Transparency in how AI operates—and clear communication about its role as a complement to human advisors—will also be critical to its success.
AI has the potential to reduce administrative burdens, freeing staff to focus on meaningful student engagement. If implemented responsibly, it can help institutions balance the demands of innovation with their commitment to equity and academic excellence.
About the Authors
Thomas Dickson, EdD, is a higher education consultant with Advisor AI and founder of Beech Dickson Consulting, with over 20 years of higher education advising and administrative experience. He brings extensive expertise in academic advising, student engagement, career services, first-generation student support, and cradle-to-career initiatives, as well as institutional transformation efforts focused on improving student success.
Arjun Arora is a visionary leader at the forefront of driving innovation in higher education. As the founder of Advisor AI, he leads a team that is revolutionizing the way colleges and universities leverage technology to enhance student success. Drawing on a deep understanding of institutional challenges, more than a decade of experience implementing AI tools at Fortune 500 companies, and the obstacles he faced in his own education and career journey, Arjun is leading the charge to embed AI-driven tools that transform student outcomes and ensure all students, irrespective of their backgrounds, are future-ready.
About Advisor AI
As a social impact company, Advisor AI leverages technology to address key challenges in higher education, including persistence, completion, and student-advisor satisfaction. By prioritizing ethical, secure, and easy-to-implement solutions, Advisor AI enables institutions to adopt technology confidently, aligning with both institutional and student well-being.
In its commitment to ethical AI, Advisor AI has recently joined the EDSAFE AI Industry Council to work alongside industry and educational leaders in developing policies that guide safe, effective AI applications in education. This involvement ensures that AI-driven solutions prioritize student success and institutional integrity.