This article is the first of a two-part series co-authored with Andy Thurai, vice president and principal analyst with Constellation Research.

The ethics, responsibility, and governance of corporate AI use are being hotly debated these days, from boardrooms to the halls of legislatures across the globe. But the other side of this discussion concerns the ethics and responsibilities of personal AI use. When is using AI to gain a personal advantage acceptable, and when does it potentially cross the line?

AI — particularly generative AI — is finding mainstream adoption as a supportive tool for career advancement. AI is now capable of supporting the entirety of one’s career lifetime, starting with classwork and continuing with job hunting, promotions, day-to-day job performance, and entrepreneurial ventures.

About 67% of job seekers are currently using AI systems and tools for everything from researching companies to drafting customized resumes and personalized cover letters, a recent survey released by Indeed finds.

AI is helping deliver advantages to those who use it effectively to advance their careers, studies confirm. Job applicants who were randomly assigned algorithmic assistance with their resumes — such as suggestions to improve spelling and grammar — were 8% more likely to be hired, according to one such study from the MIT Sloan School of Management.

In another study, 70% of 1,000 job hunters saw a higher response rate from companies when using ChatGPT. The study, published by ResumeBuilder.com (which provides AI-powered resumes and templates), also found 78% of candidates who used the chatbot scored interviews, while nearly six in 10 job hunters were hired after using the AI tool during their application process.

The availability of AI also helps balance the scales, since employers themselves use AI tools to sift through resumes and identify qualified candidates. Thanks to AI, job hunters can now contend with keyword scanners and applicant-tracking tools on more equal footing. Such tools can even mass-mail personalized cover letters and resumes to targeted employers. With the help of AI agents, job hunters can customize their applications to each individual posting at a scale that was impossible in the past.

The benefits of personal AI usage extend beyond the job hunt. AI is an editing tool, a suggestion generator, and a helpful assistant for all career-related pursuits, and AI-powered agents or assistants are springing up to support a variety of endeavors. “I have a friend who is applying for his university studies,” relates Derek Kane, head of advanced analytics for GEA Group. “As he finished writing his college essay, I wanted to help him use the AI to his advantage. I told him to create a series of AI agents with different personas using LLMs. One persona is a grammar professor that helps with proofreading. Another is a subject matter expert to help him with his content. Another agent is an admissions counselor. He used this multi-agent approach to act as a trainer for him.” In these instances, AI served as a super trainer on demand.

But Beware the Ethical Morass of AI

AI is a powerful tool individuals can use to expand their career horizons. But beyond employing AI as an information source, a communication aid, and an agent-based assistant, are there points where personal AI usage may cross ethical, moral, or even legal lines?

Consider the multiple points where the line may be crossed. First, there is the presentation of false expertise. Presenting information as one’s own research while stealthily relying on AI can mislead others into overestimating one’s skills. There is also the risk of presenting inaccurate or erroneous information, as generative AI may deliver results tainted by hallucinations, aka “making stuff up.” This is especially likely when the AI encounters topics on which it has not been trained — and it can cost one a job offer or a client, or even invite a lawsuit.

There is also the potential risk of plagiarism or even intellectual property theft. AI rarely produces original content, at least not yet; rather, it synthesizes information from existing works. The ethical line of plagiarism, and the legal line of liability, may be crossed if one uses AI for documents, proposals, presentations, articles, marketing material, or even personal correspondence without double-checking that the result is original work.

When it comes to intellectual property, using material that is not original may cross the line into IP theft. Lawyers will have a field day contesting the originality of work that earns professional recognition, a promotion, or an appointment if AI was used for portions of it. Litigation over copyrighting AI-generated works has already arisen, and in August 2023 a judge in one such case sided with the U.S. Copyright Office, affirming that such content is not original human-created work and therefore cannot be copyrighted in the United States.

Classroom cheating with AI may not only cross ethical lines but deepen socio-economic divides as well. Students familiar and adept with prompt engineering and generative AI gain an advantage over those who are not. And there is a cost involved: low-income families may be unable to afford highly specialized AI tools, or may simply be unaware of them, leaving their children unable to compete with peers who get to use such tools.

Finally, and just as important, there is the lack of originality that can accompany the overuse of AI. Humans take pride in the thinking and creativity that put them at the top of the food chain. Letting AI create portions of one’s output, or all of it, and claiming it as original work may ultimately mean setbacks for human innovation. The temptation to accomplish things with AI can clash with these ethical considerations. We recently spoke with an individual who used ChatGPT to land a highly competitive chief revenue officer position in sales. He submitted a 90-day action plan that helped him clinch the position — and only revealed that the work was generated on the AI platform after he was hired.

This situation put the hiring manager, the executive team, and the board in a dilemma they had never faced before. Did his stealth use of AI give him an unfair advantage over other qualified, more experienced, better-suited candidates? Or is everything fair in love, war, and hiring? More importantly, what if one of the passed-over candidates sues for unfair hiring practices? Is it worth defending the new hire in that situation? Should the use of AI be disclosed? Should the candidate be forced to wear an “AI helped me land this job, though I didn’t have the skills” T-shirt on the job?

In another case, an artist won a competition with an artwork created using Midjourney, an AI program that generates images. The artist, Jason Allen, took first prize at a coveted Colorado art festival for his AI-created Théâtre D’opéra Spatial.

While Allen was upfront about his use of AI, the incident still raised concerns that artists and non-artists alike can now produce, at the push of a button, works that may eventually become indistinguishable from art painstakingly created by far more skilled human artists who are nonetheless no match for AI. To make matters worse, AI-generated art may pull images from other artists’ works, making it nearly impossible for original work to compete against what amounts to a collage of forgery.

In the second part of this series, we will discuss the questions employers are raising about personal AI usage, critical questions that AI users need to consider, and how to be mindful about AI usage.
