Generative AI in business schools: friend or foe?

Since tools like ChatGPT burst into higher education, debate has focused on two extremes: either students are all committing academic fraud and plagiarism, or artificial intelligence will magically revolutionise learning. The latest research project I co-authored with Anna Holland, carried out among recent Management graduates in the United Kingdom, suggests something more complicated and surprisingly more human.

Generative AI tools such as ChatGPT are increasingly used in business and management education for tasks like analysing cases, brainstorming ideas, and drafting reports. They improve efficiency and support personalised learning, but they also raise concerns about academic integrity and assessment design. At the same time, AI literacy and the ethical use of algorithmic tools are becoming essential managerial skills.

In a qualitative study focusing specifically on business students, we explored how they actually used ChatGPT in their final year and how they felt about it. To capture these experiences, we conducted 15 in-depth semi-structured interviews and analysed them thematically, focusing on how students used ChatGPT in their studies and how they perceived its impact on their academic work and on their peers’ behaviour.

How much of a ‘no brainer’ is ChatGPT for research and assignments?

The students we interviewed described three overlapping concerns, which together help explain both their enthusiasm and their unease.

Immediacy: the practicality of a 24/7 study buddy

Students were open about the fact that ChatGPT had become part of their ordinary study toolkit, alongside search engines and lecture recordings, but faster and more conversational. They used it to summarise articles, generate examples, explain complex theories in simpler language and help plan assignments. Several described it as a way to “get unstuck” when staring at a blank page.

What mattered most was not just usefulness but speed and emotional reassurance. Unlike professors’ office hours or e-mail, AI is available instantly and without judgement. Some interviewees said they used ChatGPT to check whether they had understood a concept correctly before writing it up in their own words, or for suggestions on how to structure an essay.

For many, the new technology felt like having a private tutor who never sleeps, but one whose convenience also raised deeper questions. If AI can always “rescue” you at the last minute, are you really learning, or just producing?

Equity: who gets the ‘good’ AI?

The students who took part in our study didn’t simply worry about whether AI was allowed. They also worried about who could access the most powerful tools. Those who paid for smarter, premium versions felt they were getting more accurate, more detailed support than peers sticking to free tools. Some students saw this as just another form of educational inequality. Others were uneasy that success on assessments might increasingly depend not only on whether you could pay for better algorithms, but also on whether you had the skills to prompt the system for optimal results. Being young does not automatically make students digitally native.

At the same time, several interviewees argued that AI could make higher education fairer. Students with dyslexia, ADHD or other conditions described using ChatGPT to help with planning, time management or turning rough notes into clearer sentences. International students said it helped them write in more polished academic English.
For them, AI felt less like “cheating” and more like “levelling up” – a reasonable adjustment or a form of language support. This tension, between AI as a leveller and AI as a new source of advantage, makes equity central to how students experience these tools.

Integrity: drawing the line in a grey area

All the students we spoke to knew that “copy-pasting from ChatGPT” into an assignment would be considered cheating. But they also described a wide grey area where university rules felt vague or inconsistent. Was it acceptable to ask ChatGPT for feedback on a draft paragraph? To suggest alternative headings? To generate a list of arguments they then followed up on themselves by consulting the original sources ChatGPT pointed to? Different courses, and even different lecturers, gave different answers, leaving students unsure about what counts as legitimate assistance versus academic misconduct.

This uncertainty made some students anxious about being accused of misconduct even when they believed they were acting honestly. Group work added another layer of risk: several participants feared that one team member might lean heavily on AI, triggering plagiarism-detection software or an investigation that could affect the whole group.

Is the ‘AI bias’ off-putting for graduate employers?

Beyond university rules, students worried about how employers would view their qualifications. A recurring theme was fear that future recruiters might dismiss recent graduates’ work as “AI-generated”, devaluing the years of effort they had invested. Even those who used ChatGPT sparingly felt that their cohort might be seen as “AI-made”, regardless of individual behaviour.

This is an interesting finding, as little empirical work has been done on this aspect. There is currently little to no evidence that employers broadly distrust university degrees because of GenAI. The evidence to date suggests that hiring managers are increasingly sceptical of the written work in graduates’ applications, yet simultaneously seek graduates with AI skills. The blurred relationship between students’ work and their ability may nonetheless affect how credentials signal competence, and employers are already turning increasingly to skills verification rather than relying on credentials alone.

What universities should do next

Our findings suggest that universities need to move beyond simple messages about banning or embracing generative AI. Students are already integrating these tools into their everyday study. The question is whether institutions will help them do so transparently, equitably, and with academic integrity.

Firstly, rules about AI use need to be clearer and more consistent. Rather than broad warnings about “misuse of ChatGPT”, students need concrete, discipline-specific examples of what is allowed, and why. This includes acknowledging that some uses (for accessibility or language support, for instance) may be legitimate and even desirable.

Secondly, assessment design should focus on process as well as product. Students could be asked to explain how they used AI in an assignment, reflect on its limitations and show the steps they took to verify information. By clearly stating where AI was used in a piece of work, much as they would cite references in a footnote, students would make their AI use visible and accountable rather than something to hide.

Thirdly, universities should consider equity explicitly. If some students can buy access to far more powerful tools than others, that has implications for fairness.
Institutions could respond by providing standardised AI tools and teaching all students how to use them critically, or by redesigning assessments so that success depends less on access to premium systems. In its latest Digital Education Outlook report, Exploring Effective Uses of Generative AI in Education, the OECD urges education stakeholders to encourage “inclusive, trustworthy and meaningful uses of GenAI in education” in alignment with educational objectives.

Listening to students’ concerns about GenAI

The students in our study were not reckless rule-breakers or naive digital natives. They were thoughtful about the benefits and risks of AI, and keen to protect the value of their degrees. If universities ignore this perspective, they risk sending the message that integrity is only about catching cheats, rather than about building trust. If, instead, they engage with students’ real experiences of immediacy, equity, and integrity, generative AI could become an opportunity to rethink what meaningful learning and fair assessment in higher education look like in the age of AI, rather than a threat that quietly undermines them.
