As artificial intelligence (AI) platforms such as OpenAI's continue to advance, we discuss the complexities they raise for graduate students and researchers in social and personality psychology. AI is one of the hottest topics in technology, with current developments affecting every sector of our world. Generative AI platforms, including OpenAI's GPT models and ChatGPT as well as third-party tools, are creating waves that intersect across healthcare, academia, psychology, medicine, institutions, corporations, social norms, ethical standards, and much more.

This SPSPotlight article aims to equip you with a toolbox of current news and trends surrounding generative AI and platforms such as ChatGPT, and to show how they may connect with your current research and your future academic or professional endeavors. The trends discussed in this edition of SPSPotlight also include the ethical scope of AI in research, healthcare, medicine, and other industries.

What Are Your Thoughts?

The SPSPotlight team would love to learn more from the SPSP community about generative AI topics. Your perspectives on the use of generative AI platforms, such as ChatGPT, are extremely important. Please complete this quick two-minute survey and tell us what you think.

Did you know that there are more than 500 FDA-approved AI algorithms in healthcare? The New England Journal of Medicine (NEJM) recently announced the launch of the NEJM AI journal and is promoting a call for manuscripts. This interdisciplinary journal aims to examine how AI applications are integrated into clinical medicine and practice. NEJM AI also notes that FDA-approved, AI-powered medical devices and software already exist, yet clinical evidence for them remains lacking. NEJM has also added AI in Medicine topic pages, including reviews, resources, and opinion-based literature. In a review article from the AI in Medicine series, Ferryman et al. propose that the biased data behind skewed AI models should be treated as clinical artifacts that "identify values, practices, and patterns of inequity in medicine and healthcare" rather than as missed data points for large language models to process and learn from.
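To make that idea concrete, here is a minimal sketch of what surfacing such "artifacts" can look like in practice: comparing a model's error rates across demographic subgroups and treating a gap as a finding to investigate rather than noise to discard. All data, group labels, and field names below are hypothetical assumptions invented for illustration, not from any real clinical system.

```python
# Minimal sketch (hypothetical data): surfacing subgroup disparities in a
# clinical model's predictions instead of silently discarding them.
from collections import defaultdict

# (demographic group, true outcome, model prediction) -- illustrative only
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]

errors = defaultdict(lambda: [0, 0])  # group -> [error count, total]
for group, y_true, y_pred in records:
    errors[group][0] += int(y_true != y_pred)
    errors[group][1] += 1

for group, (wrong, total) in sorted(errors.items()):
    # A large gap between groups is itself a clinical "artifact": it can
    # point to inequities in the care patterns the model learned from.
    print(f"group {group}: error rate {wrong / total:.2f} ({wrong}/{total})")
```

In this toy example, group B's error rate is twice group A's; under Ferryman et al.'s framing, that disparity is information about inequity in the underlying data, not merely a modeling flaw to be smoothed over.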

The AI Debate

There are many sides to the AI debate, including Apple's recent restrictions on employee use of ChatGPT and GitHub's Copilot, bans at JPMorgan Chase and Verizon, and Amazon's restricted-use policies for generative AI applications. While Apple still allowed the ChatGPT app to launch on the App Store, it asked the developer to raise the download age to 17 and include content filtering. In April 2023, more than 1,000 tech leaders, including Elon Musk, published Pause Giant AI Experiments: An Open Letter, which now has over 33,000 signatures. The letter pleads with AI labs to pause and undertake ethical review, asking that powerful AI systems "should be developed only once we are confident that their effects will be positive" and that risks are manageable and mitigated. It also cites medicine and healthcare as "one of the biggest areas of opportunity for AI," yet discusses the lack of global safety protocols for advanced AI programming. Other institutions, such as the Center for Humane Technology, advocate for change through education, public policy, and free resources.

If you've used virtual telehealth for appointments in the last few years, you've likely encountered patient health platforms like MyChart, powered by Epic Systems. Epic provides electronic health record (EHR) software and holds the EHR industry's top spot, owning nearly 36% of the market share, according to a recent KLAS Research report. Epic recently announced it is working with Microsoft's Azure OpenAI large language model platform to integrate AI into its EHR software, with an initial rollout to early adopters including UNC Health of the University of North Carolina at Chapel Hill and several other institutions.

A pivotal quote came from Sam Altman, CEO of OpenAI, on the downsides of large language models such as ChatGPT and GPT-4, during Senate subcommittee hearings on potential interactive misinformation in future elections. Senators pressed Altman on the concern that these models could predict survey opinion and use finely tuned strategies to "elicit behaviors from voters," asking, "Should we be worried about this for our elections?" Altman confirmed it is an area of great concern for him: "the more general ability of these models to manipulate, to persuade, and to provide one-on-one interactive disinformation." He added that interactive disinformation via GPT models will be on "steroids" compared with past fake content created with Photoshop when that platform exploded. OpenAI also disclosed taking ChatGPT offline after a "bug" left some users' chat data compromised and visible to other users. Journalists are reporting a new era of content misinformation: generative AI is producing content that disseminates misinformation through a plethora of channels, including questionable content farms, more than 100 "fringe" news sites, fake online reviews, and health websites offering AI-powered disinformation disguised as medical advice for mental health disorders.

Nurses are now raising red flags and advocating against harmful AI practices in the healthcare industry. A recent article details how some nurses may face disciplinary action if they decide to override a medical AI model's recommendations. One nurse recounts caring for a patient with sepsis for whom she knew the AI recommendation was incorrect, yet she was unable to override the system. Other nurses in the article describe feeling "moral distress" over doing what is right versus what the medical AI algorithm tells them to do. A researcher from the University of Pennsylvania suggests that these models should be used as supportive information for clinical decision-making, not as the decision itself, as many of these AI tools "can be flawed" or implemented incorrectly.

AI Ethics and Risky Emergent Behavior

From a personal perspective, one of my course syllabi this semester outright forbade using any generative AI platforms, while another course allowed some use with restrictions. It's a topic I've been following closely this year, as AI's risky emergent behaviors remain a significant focus in my bioethics and clinical trial courses. A recent New York Times article urges students to use generative AI to study rather than to cheat, recommending ChatGPT plugins and other platforms to create flashcards or summarize research articles.

STAT's article A research team airs the messy truth about AI in medicine discusses how nearly 60 percent of answers compared between predictive AI and real-life situations "either disagreed with human specialists" or "provided information that wasn't clearly relevant." Stanford University biomedical informatics researchers suggest that AI programming remains inconsistent because there is no "hybrid construct of the human plus this technology." Alternatively, The AI Will See You Now frames generative AI's capability for "deep reasoning" as a positive: with time and learning, the AI will "identify connections and concepts that humans simply cannot see." The STAT piece also shares the Health AI Partnership website, which gives hospitals access to an online guide and resource pages.

OpenAI has acknowledged the moral and ethical issues by publishing its March 2023 GPT-4 System Card report, which details how its engineers were able to get GPT-4 to carry out "dangerous" or dubious tasks. Lying or making up stories seems to be one way GPT-4 "solves" a problem. These risky emergent behaviors in AI may represent "chilling examples" of the moral and ethical dilemmas presented by GPT-4's model. The System Card disclosures detail how a rogue AI system may become deceptive or suggest illegal, immoral, and ethically dark responses. It is unclear how ethics plays into GPT-4 when it suggests "concocting a story," which might be the exact opposite of what the AI engineer intended as output. Another example shared in the report is deeply troubling: asked about a hungry family's dilemma of stealing a loaf of bread, GPT-4 replies only with broad answers such as "it's a tough situation" and "desperate times can lead to difficult choices." The Boston Globe has also taken a stand this year, publishing multiple reports on the importance of an ethics pause before new artificial intelligence platforms such as ChatGPT come into wide use, to give the tech and scientific communities time to implement critical ethical guidelines.

Academia and the Workplace

Some academic researchers are investigating whether GPT-4 should become a scientific co-collaborator and writing partner, using the software to analyze Einstein's special theory of relativity as a case study. Other researchers are studying whether GPT can be credited as a co-author in a published peer-reviewed journal. Generative AI in the workplace and in hiring has also stirred commentary. New York City's Local Law 144 became the nation's first to regulate generative AI automation algorithms in hiring, interviewing, chatbots, and promotions, requiring public disclosure to deter "indications of potential discrimination in employment decisions."
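For a sense of the arithmetic such audits involve, here is a minimal sketch of per-group selection rates and impact ratios, the kind of metrics bias audits of automated hiring tools commonly report. All numbers and group labels are hypothetical, invented purely for illustration.

```python
# Minimal sketch (hypothetical numbers): selection-rate and impact-ratio
# arithmetic of the kind reported in bias audits of automated hiring tools.
selected = {"group_x": 45, "group_y": 28}   # candidates advanced by the tool
applied = {"group_x": 100, "group_y": 100}  # candidates screened per group

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())  # highest group selection rate as the baseline

for group, rate in rates.items():
    # Impact ratio: each group's selection rate relative to the highest
    # group's rate; values well below 1.0 flag potential adverse impact.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

Under the commonly used four-fifths rule of thumb, an impact ratio below 0.80 (group_y's 0.62 in this toy example) would flag the tool for closer review.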

The Chronicle of Higher Education has also published a series of articles on generative AI in academic settings, with some researchers calling it a "love it or hate it" implementation. Bloomberg and Yahoo Finance reported concerns from universities that seemed to be "scrambling" to implement AI protocols and procedures. These outlets discuss how some instructors have completely reinvented their curricula to mitigate AI risk in the classroom.

The American Psychological Association (APA) has also recently published a series of articles on generative AI and its impacts on the psychological community and America's workforce. The APA Monitor shares that while AI's potential is high, there is "still cause for concern": rogue AI systems have disseminated misinformation, professed affection for users, and been involved in the sexual harassment of minors online. The APA also notes that medical healthcare AIs "have discriminated against people based on their race and disability status" due to biases uncovered in the models, and encourages clinicians to "question assumptions" about these new technologies.

The APA's healthy workplace article, Worried about AI in the workplace? You're not alone, includes the statistic that nearly 38% of U.S. workers are concerned about AI taking over their job responsibilities. However, the APA Monitor also discusses the more positive aspects of how ChatGPT and GPT-4 are "ripe with potential" for students. The APA's PsycLearn team also recently recorded a webinar, Pushing the Boundaries of Critical Thinking: What's next in the era of generative AI, discussing the role of generative AI in critical thinking and higher education; you can watch it on demand here.

How are you using generative AI tools? Add your voice to the survey here.
