
Informed Citizens are Better Citizens

by Sylvia Mendoza

Artificial intelligence (AI) sounds like something from a science fiction movie or a spy novel. The truth is we experience AI every day: when Google figures out what you’re searching for before you finish typing, when auto-complete finishes your sentence as you compose a text or email, even when Amazon recommends a book. It’s all AI.

According to the National Artificial Intelligence Initiative Office (NAIIO), which provides technical and administrative support for the White House’s Select Committee on AI, the term artificial intelligence was coined in 1956 at Dartmouth College during a conference attended by computer science researchers from across the country. That meeting—where researchers discussed the possibility that machines could communicate, imitate human behavior, and solve problems—sparked decades of government and industry research in AI.

Along comes ChatGPT

More than 60 years after the Dartmouth meeting, ChatGPT made its debut in November 2022. Created by OpenAI, an artificial intelligence research company, ChatGPT is a form of generative AI. It lets users enter online prompts in conversational dialogue. ChatGPT responds to those prompts by “generating” a variety of content, including articles, social media posts, essays, computer code, emails, images, texts and videos.

The “GPT” stands for Generative Pre-Trained Transformer. GPT uses specialized algorithms to find patterns in data sequences. ChatGPT’s algorithm can produce “original” text that comes from large amounts of information that has been “scraped” off the internet. Scraping is the process of using bots to extract content and data from a website. Legal issues surrounding scraping include invasion of privacy, copyright, defamation, ethics, and more.
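To make scraping concrete, here is a minimal sketch in Python of how a bot might extract the text from a single web page. It uses two common, publicly available libraries (requests and BeautifulSoup); the URL is a placeholder, and the sketch illustrates the general technique only, not how OpenAI actually gathers training data.

    # A minimal sketch of web scraping: a bot fetches one page and pulls
    # out its visible text. The URL below is a placeholder, not a real source.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/some-article"  # hypothetical page
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # stop here if the page could not be fetched

    # Parse the HTML and collect the text of every paragraph tag
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

    # Repeated across millions of pages, text gathered this way becomes the
    # raw material that models like GPT are trained on.
    print("\n".join(paragraphs))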

Even with the fears and unknowns surrounding ChatGPT, it is the fastest-growing consumer internet app of all time, garnering an estimated 100 million monthly users in just two months. For context, Facebook took a little over four years to reach that many users, Twitter [now X] took a little over five years, and Instagram took two years.

Rebecca L. Rakoski, managing partner of a cybersecurity and data privacy law firm in Marlton, NJ, says one of the biggest concerns with AI is individual privacy rights. Once a person uses ChatGPT or a similar AI platform, it gets to “know” them, she says, and uses their personal information, previous experiences, biases, and style.

“We ‘feed’ AI with data,” says Rakoski, who also co-chairs the New Jersey State Bar Association’s AI Task Force. “That data is about someone. That person has rights about how that AI is used, so for me, I would like to ensure that the use of AI is well understood and transparent.”

Education issues

Emily J. Isaacs, executive director of the Office for Faculty Excellence and Academic Affairs at Montclair State University, who has been teaching writing for more than 25 years, recognizes that the pace of AI growth may just be the beginning of disruption in higher education.

“The use of generative AI in education could be disruptive, much the way social media has turned out to be disruptive,” Professor Isaacs says. “I did not realize how powerfully it would change how people behave, think and learn about politics, history, and cultural phenomena.”

Understandably, ChatGPT and other AI chatbots have raised fears of cheating in the education field.

“For example, if you are asked to write a summary paper on the origins of the Civil War based on three readings your teacher has given you, and instead of doing the readings you enter that question into ChatGPT, you are being academically dishonest,” Professor Isaacs says.

At the other extreme, high school and college students have been falsely accused of using ChatGPT, which can affect not only their grades but the relationship between educator and student. Most educators use AI detectors, such as Turnitin or GPTZero, to weed out cheaters. The problem is that these detectors can be inaccurate, giving a false positive result. In fact, OpenAI shut down its AI detector tool in July 2023 due to its “low rate of accuracy.”

Another problem with ChatGPT is that it can produce inaccurate material. For instance, if a student uses it to produce a paper complete with footnotes—something ChatGPT can do—the final product could be riddled with factually inaccurate information. ChatGPT warns users that it could generate incorrect or misleading information, or biased content, which can be a problem for students who don’t verify and cite original sources. Professor Isaacs explains that this can lead to ethical and legal concerns.

Teresa Kubacka, a data scientist in Switzerland, told National Public Radio (NPR) that she tested ChatGPT by deliberately asking it about something that doesn’t exist: a made-up physical event. She said it produced a “specific and plausible sounding” answer complete with citations. On closer inspection, however, Dr. Kubacka found that the citations pointed to bogus publications supposedly authored by real, well-known physics experts.

“This is where it becomes kind of dangerous,” Dr. Kubacka told NPR. “The moment that you cannot trust the references, it also kind of erodes the trust in citing science whatsoever.”

Professor Isaacs says, “What we do know is that generative AI can be a powerful tool for learners who are wide awake and paying attention when they interact with the tools, carefully selecting what they type into the Gen AI and just as carefully and critically reading what it produces.”

Educating students to use AI tools ethically and responsibly can better prepare them for a future where AI in the workplace will be commonplace. The U.S. Department of Education report, “Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations,” addresses the importance of trust, safety, and appropriate guardrails to protect educators and students.

Need for AI regulations

With the rapid pace of AI advancement and its potential ripple effects, even the tech giants think guidelines and guardrails are needed. Sam Altman, CEO of OpenAI, suggested in a 2023 congressional hearing that the federal government should create licenses to ensure developers thoroughly test AI models before they are made available to the public. The federal government already issues licenses for a variety of industries. For example, the Federal Communications Commission (FCC) licenses the airwaves, from radio and television broadcasting to satellite communications and cell towers.

In a hearing before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law, held in September 2023, Microsoft President Brad Smith called for legislators to create a “safety brake” for AI systems.

“If a company wants to use AI to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply… we need a safety brake, just like we have a circuit breaker in every building and home in this country,” Smith said in the hearing. “Maybe it’s one of the most important things we need to do so that we ensure that the threats that many people worry about remain part of science fiction and don’t become a new reality.”

In July 2023, the Federal Trade Commission (FTC) launched an investigation to determine whether ChatGPT violated consumer protection laws through its collection of data. At press time, the investigation remains ongoing.

Orders from the Executive Branch

In October 2023, President Joseph Biden signed a 63-page executive order addressing concerns about AI. According to a White House fact sheet, the order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world.”

One action in the executive order requires AI companies to disclose the results of safety tests and directs the U.S. Commerce Department to oversee whether those tests and precautions are sufficient. To address the possible discriminatory use of AI, the executive order directs agencies “to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety.” The order stipulates that landlords, federal benefits programs and federal contractors must be provided clear guidance to keep AI algorithms from worsening discrimination. In addition, the order states that the criminal justice system, under the guidance of the Department of Justice and federal civil rights offices, should address algorithmic discrimination by developing best practices for the use of AI in sentencing, parole and probation, as well as pretrial release and detention.

According to the National Conference of State Legislatures, 30 states have passed more than 50 laws over the last five years to address AI in some capacity. Only 12 states, including New Jersey, have enacted laws to create task forces to increase AI knowledge. Some states are focused on protecting consumer privacy data. New Jersey, along with 10 other states, has passed legislation to ensure that the adoption of AI does not perpetuate bias or add to societal discrimination, especially in hiring practices.

The list of what needs protection from AI advances keeps growing. The New Jersey State Bar Association’s AI Task Force, for example, was created to review the complex questions and ethical implications AI raises for the practice of law, make recommendations for best practices for New Jersey attorneys, and examine potential downsides.

“Like any source being used, it is important to have policies and practices so that when AI is used, it is properly attributed and only used in appropriate situations,” explains Rakoski. “AI can be a tool in our toolbox, but it should not be the only tool.”

Discussion Questions

  1. What are the potential benefits of AI? What are the potential harms? Explain your answer.
  2. Should it be the government’s role to regulate AI? Why or why not?
  3. If you were creating regulations for AI, what would you include? Explain your answer.

Glossary Words
algorithm—a set of rules to be followed in calculations, especially by a computer.

BONUS CONTENT: The Dark Side of Artificial Intelligence

While artificial intelligence has its benefits, there is a dark side to AI. For example, in January 2024, an AI-generated robocall imitating President Joseph Biden’s voice advised people not to vote in the New Hampshire presidential primary. The call, which went out to approximately 5,000 New Hampshire voters before the state’s primary election, told them, “It’s important that you save your vote for the November election.” To be clear, voting in a primary election does not preclude a registered voter from voting in the November general election. After the incident, the Federal Communications Commission (FCC) adopted a ruling clarifying that using AI-generated voices in robocalls is illegal.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said in a statement. “We’re putting the fraudsters behind these robocalls on notice.”

In March 2023, the Federal Trade Commission (FTC) issued a warning that AI has allowed scammers to enhance their “family emergency schemes.” This particular scam targets older adults with a voice model of a supposed family member who needs money because they are in some kind of trouble. With AI, the FTC explains, a scammer needs as little as three seconds of audio, obtained from online posts, to produce a realistic-sounding message, often fooling family members into wiring money to help their loved one.

Deepfakes spark proposed legislation

In January 2024, Taylor Swift had a brush with the dark side of AI when someone created pornographic “deepfakes” of the pop star and posted them to an online bulletin board. A deepfake is an AI-manipulated video or photo that uses someone’s likeness without permission. The deepfake images of Swift were taken down after 17 hours. In that time, they amassed 45 million views and were reposted 24,000 times.

Deepfakes aren’t just reserved for celebrities. In October 2023, a group of boys at New Jersey’s Westfield High School created AI-generated pornographic images of female classmates without their knowledge.

The incident in Westfield heightened awareness about deepfakes and highlighted a bill introduced in the New Jersey Senate in March 2023. The bill would prohibit deepfake pornography and impose criminal penalties for non-consensual disclosure. A federal bill, called the Preventing Deepfakes of Intimate Images Act, first introduced in the U.S. House of Representatives in May 2023, would make it “a crime to intentionally disclose (or threaten to disclose) a digital depiction that has been altered using digital manipulation of an individual engaging in sexually explicit conduct.”

At press time, no action had been taken on either of these bills. —Jodi L. Miller