By Joseph Carvalko | 19 May 2023
Church and State
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” -Stephen Hawking
If quantum mechanics and the computer were the paradigm shifts that defined the Twentieth Century, artificial intelligence will undoubtedly define the Twenty-First. For several weeks I have watched hand-wringing by everyone from my social media contacts to TV talking heads and U.S. senators, all concerned by the latest deep-learning neural networks developed by Microsoft, Google, Meta, and others, which generate creative content from natural-language descriptions.
The most talked-about product in today’s news is ChatGPT, which responds to language prompts by drafting emails, essays, poetry, fictional stories, or computer code for novel apps. Other products, such as DALL-E, use natural-language descriptions to produce images of scenes and faces, often in motifs resembling a Dalí or a Rembrandt. Following its launch in November 2022, ChatGPT took two months to reach 100 million users. Users of these products are rapidly finding ready-made markets. In early May, the Republican National Committee (RNC) released a deceptive, AI-generated dystopian political advertisement, offering a glimpse of how the latest AI tech could be used in next year’s election cycle. The ad prompted Congresswoman Yvette Clarke (D-NY) to introduce a bill requiring disclosure of AI-generated content in political ads. But the issue won’t end there.
An extension built on the GPT models, referred to as AutoGPT, claims further capability: creating apps and websites, conducting market research, and pursuing those objectives to a detailed result without human intervention. ChatGPT requires numerous prompts to achieve goals involving multiple tasks, whereas AutoGPT decides autonomously how to reach a goal.
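To illustrate the distinction, an AutoGPT-style loop can be sketched as below. This is a hypothetical stub, not the actual AutoGPT code: the `model` function stands in for calls to a large language model, and the canned plan is invented for the example.

```python
# Illustrative sketch only: mimics how an AutoGPT-style agent differs
# from plain ChatGPT. The "model" below is a stub, not a real API call;
# all names and the canned plan are hypothetical.

def model(prompt):
    # Stand-in for a call to a large language model.
    canned = {
        "plan": ["research the market", "draft the website", "write the report"],
    }
    return canned.get(prompt, "done: " + prompt)

def autonomous_agent(goal):
    """Given one high-level goal, the agent itself decides the sub-tasks
    and works through them without further human prompts."""
    tasks = model("plan")            # the agent, not the user, decomposes the goal
    results = []
    for task in tasks:
        results.append(model(task))  # each sub-task becomes its own prompt
    return results

results = autonomous_agent("launch a product")
print(results)
```

With plain ChatGPT, a human would have to type each of those sub-tasks as a separate prompt; here the loop issues them itself.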
Geoffrey Hinton, an AI pioneer, quit his job at Google, where he has worked for more than a decade, so he can freely speak out about the risks posed by AI. “It is hard to see how you can prevent the bad actors from using it for bad things,” he said. https://t.co/ahvSZRavfN
— The New York Times (@nytimes) May 2, 2023
This rapidly changing technological landscape is the result of a new development: transformer-architecture AI. Transformers were introduced in 2017 by a team at Google Brain and represent a quantum leap in natural language processing, as they are more amenable to parallelization and to training on larger datasets. On the performance end, their outputs strikingly match human-level work. More powerful transformer-AI products are on the horizon. The latest, GPT-4, appears to exhibit signs of artificial general intelligence and is the most powerful and impressive AI to date. Developers of the GPT class of products claim it can take the written portion of the U.S. bar exam, producing essays that demonstrate knowledge of the underlying legal principles at issue. Supplied with a biochemical molecule, it can, on request, propose variations of that molecule. Concern abounds about the potential of these systems to replace skilled workers, e.g., artists and authors engaged in producing advertisements and entertainment, to free students from having to write essays, and to establish new protocols for programmers and nonprogrammers alike in the creation of software. For a detailed explanation of the technology and early experiments with GPT-4, see Sparks of Artificial General Intelligence, https://doi.org/10.48550/arXiv.2303.12712.
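At the core of the transformer architecture is scaled dot-product attention, in which every token attends to every other token at once. A minimal sketch (using NumPy, with random toy data rather than learned weights) shows why the computation parallelizes so well: it reduces to a few matrix multiplications.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each query token is compared against
    all key tokens in one matrix product, so the whole sequence is
    processed in parallel rather than step by step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # all pairwise similarities at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # weighted mix of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings (random stand-ins for real data)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per token
```

Because these matrix products have no sequential dependency across tokens, they map efficiently onto GPUs, which is what makes training on very large datasets practical.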
For many years AI has been part of our lives, with algorithms driving social media, election-results management, access to credit, stock markets, sports, gambling, hiring, biomedical analysis, medical devices, robotic surgery, driverless vehicles, and airplanes. In every respect, modern life runs into AI at some level. But now more than ever, AI threatens to swallow a chunk of what was once an exclusively human-inspired domain: composition, art, the production of media, political persuasion. Until now, media commentators, government functionaries, and elected officials have largely ignored AI and its implications. But we ignore it at our peril, as this technology-induced paradigm change will necessitate a reorganization of society on many levels, not the least of which involves employment, education, justice, medicine, and governing itself.
I am not particularly surprised at where AI might go. Unfortunately, I place little faith in government’s ability to respond to or counter the ways it might negatively impact our lives, primarily because policymakers have little understanding of how the technology works or how it will play out over the long term. Technologies such as nuclear power have obviously devastating consequences, so relatively little incentive was needed to subject their use to strict regulation. But countless technologies exist that don’t immediately manifest their potential for planet-altering effects. And when the consequences become apparent, policymakers ignore the problem, because solutions require overcoming ignorance, self-interest, or large-scale skepticism. Examples include cigarette smoking, which was found to cause cancer; climate change, caused by excessive fossil-fuel use; and, in the US, the use of assault rifles in senseless mass murders.
Within the next few years, AI technologies such as GPT-4 will permanently change the nature of creativity and problem solving in mathematics, law, and medicine, while also aiding those challenged by physical and psychological disabilities. AI’s power and success won’t stop us humans from composing, authoring, or inventing, as we are wired to express ourselves in ways that ensure our survival, both materially and aesthetically. Yet this transformative AI would likely, eventually, create new inventions without human prompts. Some of these will take form as AI-generated human-like avatars (posing as actors, hucksters, or politicians) and humanoid robots (for companionship and utility). Coupled with human contributions to products and processes, the societal changes will dwarf the transitions the world experienced going from horse-drawn carts to high-speed automobiles, bullhorns to television, or snail mail to email.
"Worst case scenario [for AI] is… it controls humanity"
Stability AI CEO Emad Mostaque speaks to #BBCLauraK about the potential and the dangers of artificial intelligence
— BBC Politics (@BBCPolitics) May 13, 2023
As a general matter, technology ethicists have warned for decades that AI raises what are referred to as the capability claim and the value claim, especially as regards artificial general intelligence. The first claim asks whether AI can become sufficiently capable of inflicting major damage to well-being. GPT-type technologies, especially the autonomous versions, should raise significant concerns among policymakers. The value claim follows from the capability claim: it asks whether AI will always act according to human values, such as “do no harm,” and, if not, whether its actions could cause significant harm. The time has come to bring together the full panoply of scientists, technologists, ethicists, and policymakers to prepare for the seismic repercussions to life on the planet that ever-advancing AI, if not regulated, is bound to produce.
Specific to AI, the European Commission has identified applications of AI based on their potential for widespread harm and has moved to enact the European AI Act (AIA), which addresses the risks of specific uses of AI. The AIA applies to machine learning, expert and logic systems, and Bayesian or statistical approaches whose outputs “influence the environments they interact with,” which includes generative AI products like ChatGPT. The legislation distinguishes four categories of AI use: unacceptable risk, high risk, limited risk, and minimal or no risk. On May 15, 2023, a committee in the European Parliament approved the AIA, which is expected to pass into legislation.
The #AIAct’s committee voted and approved a new version of the AI Act, a version that shows that the European Institutions and the AI Act rapporteurs have listened to the concerns of the creative community and to our requests.
This new text is a huge step in the right direction. pic.twitter.com/1buAoFtQEx
— European Guild for AI Regulation (@Egair_official) May 12, 2023
Earlier, the European Parliament’s resolution of 16 February 2017 offered recommendations to the Commission on Civil Law Rules on Robotics, including the principle that Asimov’s Laws must be regarded as directed at the designers, producers, and operators of robots, including robots with built-in autonomy and self-learning, since the resolution holds that Asimov’s Laws cannot be converted into machine code, presumably to prevent a robot from acting against the interests of humanity. ChatGPT, and certainly GPT-4, appears able to appreciate abstractions and understand human motives, as well as to create code and self-learn, and could therefore lead to the instantiation of the technology into robots, broadly speaking, that behave autonomously. The AIA may soften the impact of AI by ensuring that in extreme cases, e.g., autonomous weapons or medical devices, developers of the technology will be subject to a measure of scrutiny and control.
Over the course of its history, the US has established numerous regulatory agencies to deal with technology. For example, the Federal Communications Commission (FCC) regulates communications, and the Food and Drug Administration (FDA) regulates drugs and medical devices. But when it comes to more amorphous forms of digital technology, such as data gathering, data security, or the reach of the Internet, regulation has remained lethargic. The government has yet to regulate any concrete aspect of social media. Successfully regulating any technology requires experts in the technology and its application; this has been true for communications as well as medical technology. To regulate drugs, for example, the FDA enlists chemists, physicians, statisticians, patients, and policy experts. The Select Committee on AI, created in June 2018, advises the White House on interagency AI R&D priorities. But neither the Executive Branch nor Congress has yet assembled any meaningful AI oversight commission. And given the current dysfunction in the U.S. Congress and the breadth of commercial interests at stake in the generative field that GPT-4 now represents, it’s unlikely such a commission could be established in time to meaningfully shape the many directions this technology might take or the depths to which it may penetrate.
AI dangers go beyond our imagination.
‘If it goes wrong'
Sam Altman, the founder of ChatGPT faced the senate hearing on the urgent action for A.I. safety and regulation.
The responses from Sam Altman were noteworthy.
The recording is 4h 38mins long
4 things you need to know: pic.twitter.com/Vw6UZMf11i
— Ruben (@RubenHssd) May 18, 2023
In the U.S. we are in an era rife with unbridled commercialization, fierce competition, and political instability, each in its own way pushing the boundaries of technological conquest. But with advances in know-how comes responsibility. Powerful tools in the hands of irresponsible agents always threaten the fabric of civilization. The latest transformer-AI technology raises questions about the suitable level of unregulated institutional engagement: how far should an institution, such as a corporation, be allowed to control the operation, distribution, and use of a technology, especially one with significant potential for deleteriously affecting society? We now face a new threat, brought about by transformer-based AI. The question is whether our government will heed the warnings and work alongside developers to advance humane goals, or simply allow the technology to propagate, unconstrained by the value “do no harm.”