History
The academic discipline of artificial intelligence was founded at a research workshop at Dartmouth College in 1956 and has experienced several waves of advancement and optimism in the decades since.[1] Since its founding, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues had previously been explored in myth, fiction and philosophy since antiquity.[2] The concept of automated art dates back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music.[3][4] The tradition of creative automatons has continued throughout history, as with Maillardet's automaton, created in the early 1800s.[5]
Artificial intelligence has captivated society since the mid-20th century. Science fiction first familiarized the public with the concept, but it was not seriously examined as a scientific question until Alan Turing investigated its feasibility. Early development of AI was slow, owing to the high cost of computing and the fact that computers could not yet store commands. This changed with the 1956 Dartmouth Summer Research Project on AI, whose inspiring call for research made it a landmark event, setting the precedent for two decades of rapid advancement in the field (Anyoha).
Since the founding of AI in the 1950s, artists and researchers have used artificial intelligence to create artistic works. By the early 1970s, Harold Cohen was creating and exhibiting generative AI works created by AARON, the computer program Cohen created to generate paintings.[6]
Markov chains have been used to model natural languages since their development by Russian mathematician Andrey Markov in the early 20th century. Markov published his first paper on the topic in 1906[7][8][9] and analyzed the pattern of vowels and consonants in the novel Eugene Onegin using Markov chains. Once a Markov chain is trained on a text corpus, it can be used as a probabilistic text generator.[10][11]
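The text-generation idea in the last sentence can be sketched in a few lines: count which words follow each state in a training corpus, then take a random walk over those observed transitions. A minimal illustrative sketch (the function names and toy corpus are ours, not drawn from the cited sources):

```python
import random
from collections import defaultdict

def build_chain(words, order=1):
    """Count observed successors: each `order`-word state maps to the words seen after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=1, length=10, seed=None):
    """Random-walk the chain, sampling each next word from the learned transitions."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain.keys())))  # start from a random observed state
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: no observed continuation for this state
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A toy corpus; real applications train on much larger bodies of text.
corpus = "the cat sat on the mat and the cat ran".split()
print(generate(build_chain(corpus), length=8, seed=42))
```

Because repeated successors appear multiple times in each list, sampling uniformly from the list reproduces the transition probabilities observed in the corpus.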
The field of machine learning often uses statistical models, including generative models, to model and predict data. Beginning in the late 2000s, the emergence of deep learning drove progress and research in image classification, speech recognition, natural language processing and other tasks. Neural networks in this era were typically trained as discriminative models, due to the difficulty of generative modeling.[12]
In 2014, advancements such as the variational autoencoder and generative adversarial network produced the first practical deep neural networks capable of learning generative, rather than discriminative, models of complex data such as images. These deep generative models were the first able to output not only class labels for images, but to output entire images.
In 2017, the Transformer network enabled advancements in generative models over older long short-term memory (LSTM) models,[13] leading to the first generative pre-trained transformer (GPT), known as GPT-1, in 2018.[14] It was followed in 2019 by GPT-2, which demonstrated the ability to generalize unsupervised to many different tasks as a foundation model.[15]
In 2021, the release of DALL-E, a transformer-based pixel generative model, followed by Midjourney and Stable Diffusion, marked the emergence of practical, high-quality artificial intelligence art from natural language prompts.
In March 2023, GPT-4 was released. A team from Microsoft Research argued that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system".[16] Other scholars have disputed that GPT-4 reaches this threshold, calling generative AI "still far from reaching the benchmark of ‘general human intelligence’" as of 2023.[17]
Audio deepfakes
Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI.[18][19][20][21][22][23] In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards and identity verification.[24]
AI-generated music has spawned both concern and fandom. The same software used to clone voices has been applied to famous musicians' voices to create songs that mimic them, attracting both tremendous popularity and criticism.[25][26][27] Similar techniques have also been used to create improved-quality or full-length versions of songs that have been leaked or have yet to be released.[28]
Generative AI has also been used to create new digital artist personalities, some of which have received enough attention to earn record deals at major labels.[29] The developers of these virtual artists have faced criticism for their personified programs, including backlash for "dehumanizing" an art form and for creating artists that make unrealistic or immoral appeals to their audiences.[30]
Cybercrime
Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams.[31] Deepfake video and audio have been used to create disinformation and fraud. Former Google fraud czar Shuman Ghosemajumder has predicted that while deepfake videos initially created a stir in the media, they would soon become commonplace and, as a result, more dangerous.[32] Additionally, large language models and other forms of text-generation AI have been used at a broad scale to create fake reviews on e-commerce websites to boost ratings.[33] Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT.[34]
Research published in 2023 revealed weaknesses in generative AI that criminals can exploit to extract harmful information while bypassing ethical safeguards. The study presents example attacks on ChatGPT, including jailbreaks and reverse psychology, and notes that malicious individuals can use ChatGPT for social engineering and phishing attacks, revealing the harmful potential of these technologies.[35]
- ^ Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. p. 109. ISBN 0-465-02997-3.
- ^ Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. pp. 45–53. ISBN 978-0-672-30412-5.
- ^ Noel Sharkey (July 4, 2007), A programmable robot from 60 AD, vol. 2611, New Scientist, archived from the original on January 13, 2018, retrieved October 22, 2019
- ^ Brett, Gerard (July 1954), "The Automata in the Byzantine "Throne of Solomon"", Speculum, 29 (3): 477–487, doi:10.2307/2846790, ISSN 0038-7134, JSTOR 2846790, S2CID 163031682.
- ^ kelinich (2014-03-08). "Maillardet's Automaton". The Franklin Institute. Retrieved 2023-08-24.
- ^ Bergen, Nathan; Huang, Angela (2023). "A Brief History of Generative AI" (PDF). Dichotomies: Generative AI: Navigating Towards a Better Future (2): 4.
- ^ Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 2–8. ISBN 978-1-119-38755-8.
- ^ Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.
- ^ Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8. Archived from the original on 23 March 2017.
- ^ Hayes, Brian (2013). "First Links in the Markov Chain". American Scientist. 101 (2): 92. doi:10.1511/2013.101.92. ISSN 0003-0996.
- ^ Fine, Shai; Singer, Yoram; Tishby, Naftali (1998-07-01). "The Hierarchical Hidden Markov Model: Analysis and Applications". Machine Learning. 32 (1): 41–62. doi:10.1023/A:1007469218079. ISSN 1573-0565. S2CID 3465810.
- ^ Tony Jebara (2012). Machine learning: discriminative and generative. Vol. 755. Springer Science & Business Media.
- ^ Cao, Yihan; Li, Siyu; Liu, Yixin; Yan, Zhiling; Dai, Yutong; Yu, Philip S.; Sun, Lichao (2023-03-07). "A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT". arXiv:2303.04226 [cs.AI].
- ^ "finetune-transformer-lm". GitHub. Retrieved 2023-05-19.
- ^ Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya; others (2019). "Language models are unsupervised multitask learners". OpenAI Blog. 1 (8): 9.
- ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (March 22, 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
- ^ Schlagwein, Daniel; Willcocks, Leslie (September 13, 2023). "ChatGPT et al: The Ethics of Using (Generative) Artificial Intelligence in Research and Science". Journal of Information Technology. 38 (2): 232–238. doi:10.1177/02683962231200411. S2CID 261753752.
- ^ "People Are Still Terrible: AI Voice-Cloning Tool Misused for Deepfake Celeb Clips". PCMag Middle East. 2023-01-31. Retrieved 2023-07-25.
- ^ "The generative A.I. software race has begun". Fortune. Retrieved 2023-02-03.
- ^ Milmo, Dan; Hern, Alex (2023-05-20). "Elections in UK and US at risk from AI-driven disinformation, say experts". The Guardian. ISSN 0261-3077. Retrieved 2023-07-25.
- ^ "Seeing is believing? Global scramble to tackle deepfakes". news.yahoo.com. Retrieved 2023-02-03.
- ^ Vincent, James (January 31, 2023). "4chan users embrace AI voice clone tool to generate celebrity hatespeech". The Verge. Retrieved 2023-02-03.
- ^ Thompson, Stuart A. (2023-03-12). "Making Deepfakes Gets Cheaper and Easier Thanks to A.I." The New York Times. ISSN 0362-4331. Retrieved 2023-07-25.
- ^ "A new AI voice tool is already being abused to make deepfake celebrity audio clips". Engadget. January 31, 2023. Retrieved 2023-02-03.
- ^ Gee, Andre (2023-04-20). "Just Because AI-Generated Rap Songs Go Viral Doesn't Mean They're Good". Rolling Stone. Retrieved 2023-12-06.
- ^ Coscarelli, Joe (April 19, 2023). "An A.I. Hit of Fake 'Drake' and 'The Weeknd' Rattles the Music World". The New York Times. Retrieved December 5, 2023.
- ^ Lippiello, Emily; Smith, Nathan; Pereira, Ivan (November 3, 2023). "AI songs that mimic popular artists raising alarms in the music industry". ABC News. Retrieved 2023-12-06.
- ^ Skelton, Eric. "Fans Are Using Artificial Intelligence to Turn Rap Snippets Into Full Songs". Complex. Retrieved 2023-12-06.
- ^ Marr, Bernard. "Virtual Influencer Noonoouri Lands Record Deal: Is She The Future Of Music?". Forbes. Retrieved 2023-12-06.
- ^ Thaler, Shannon (2023-09-08). "Warner Music signs first-ever record deal with AI pop star". New York Post. Retrieved 2023-12-06.
- ^ Sjouwerman, Stu (2022-12-26). "Deepfakes: Get ready for phishing 2.0". Fast Company. Retrieved 2023-07-31.
- ^ Sonnemaker, Tyler. "As social media platforms brace for the incoming wave of deepfakes, Google's former 'fraud czar' predicts the biggest danger is that deepfakes will eventually become boring". Business Insider. Retrieved 2023-07-31.
- ^ Collinson, Patrick (2023-07-15). "Fake reviews: can we trust what we read online as use of AI explodes?". The Guardian. ISSN 0261-3077. Retrieved 2023-12-06.
- ^ "After WormGPT, FraudGPT Emerges to Help Scammers Steal Your Data". PCMAG. Retrieved 2023-07-31.
- ^ Gupta, Maanak; Akiri, Charankumar; Aryal, Kshitiz; Parker, Eli; Praharaj, Lopamudra (2023). "From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy". IEEE Access. 11: 80218–80245. Bibcode:2023IEEEA..1180218G. doi:10.1109/ACCESS.2023.3300381. S2CID 259316122. Retrieved 2023-11-17.