Two classes of AI systems contributing to current AI success stories — and to much of the hype about future applications — are generative AI and discriminative AI.

  • Generative AI systems create things, such as pictures, audio, writing samples and anything that can be built with computer-controlled systems, such as 3D printers.
  • Discriminative systems identify things such as people in pictures, words in speech or handwriting and what’s real versus what’s fake.

Both are based on neural network models. Both generate output in response to input data, and both modify their own internal structure, based on feedback about how good the preceding answer was, to change what output they generate. They start with little or no built-in knowledge and are trained using large volumes of data.
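The feedback loop described above, in which a model adjusts its internal parameters based on how good its previous answer was, can be illustrated with a single artificial neuron trained by gradient descent. This is a deliberately tiny sketch of the general idea, not any particular product's implementation:

```python
# Toy illustration: a single neuron adjusts its weights based on
# feedback (the error of its previous answer) -- the core loop behind
# neural-network training.
import math
import random

def train_neuron(samples, labels, epochs=200, lr=0.5):
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Forward pass: produce an answer.
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            # Feedback: how wrong was that answer?
            error = pred - y
            # Update internal structure (weights) to do better next time.
            w = [wi - lr * error * xi for wi, xi in zip(w, x)]
            b -= lr * error
    return w, b

def predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Learn a simple OR-like rule from four labeled examples.
data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 1]
w, b = train_neuron(data, labels)
```

The neuron starts with near-zero knowledge (random weights) and, through repeated feedback, ends up reproducing the rule implicit in its training data.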

Among the most powerful ways to combine these two classes of AI is the generative adversarial network (GAN) model. Each model in the GAN helps train the other, improving its performance and yielding a powerful machine-learning system. (The most visible AI applications, such as ChatGPT, rest on a different architecture: they are transformer-based large language models, not GANs.)

For example, a GAN for creating realistic yet fake yearbook photos might use a generative model to synthesize human faces that the developers then pass along with real photos through a discriminative model to see if it can tell which are fake and which are real. The exercise trains both models. The discriminator gets better at identifying fakes, as it’s told which images were created by the generator. The generator gets better at creating realistic photos, as it’s told which fakes the discriminator successfully identified.
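The alternating training described above can be sketched in miniature. In this toy example, whose every detail is simplified for illustration, a one-parameter-pair "generator" learns to mimic real data drawn from a normal distribution centered at 4, while a logistic "discriminator" learns to tell real samples from generated ones; each side's errors provide the other side's training signal:

```python
# Minimal 1-D GAN sketch (illustrative only): generator g(z) = a*z + c
# tries to mimic real data from N(4, 1); discriminator
# d(x) = sigmoid(u*x + v) tries to tell real from fake.
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

a, c = 1.0, 0.0   # generator parameters
u, v = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(1000):
    real = [random.gauss(4, 1) for _ in range(16)]
    noise = [random.gauss(0, 1) for _ in range(16)]
    fake = [a * z + c for z in noise]

    # Discriminator update: push d(real) -> 1 and d(fake) -> 0.
    gu = gv = 0.0
    for x, y in [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]:
        err = sigmoid(u * x + v) - y       # prediction minus label
        gu += err * x
        gv += err
    u -= lr * gu / 32
    v -= lr * gv / 32

    # Generator update: push d(fake) -> 1 (fool the discriminator).
    ga = gc = 0.0
    for z in noise:
        x = a * z + c
        err = sigmoid(u * x + v) - 1.0     # wants to be judged "real"
        ga += err * u * z
        gc += err * u
    a -= lr * ga / 16
    c -= lr * gc / 16
```

After training, the generator's offset `c` has drifted from 0 toward the real data's mean, purely because the discriminator kept catching its fakes.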

Generative AI examples show success in many industries

Pharmaceuticals. Pharmaceutical companies, including Amgen, Insilico Medicine and others, along with academic researchers, are applying generative AI in areas such as designing proteins for medicines. Predicting how proteins fold has been an enormous challenge for geneticists and pharmaceutical developers for decades. Generative models are improving researchers' ability to understand protein structure and to design new proteins.

[Diagram: generative adversarial network basics, showing the GAN training method.]

Genetics research. AI is also contributing to genetics research. Geneticists are learning to understand gene expression, that is, how specific genes and combinations of genes get turned on and off, and what genes do when they're active. AI is also helping researchers predict how gene expression will change in response to specific changes in the genes. This shows enormous promise for the development of gene therapies. It could also optimize treatments by predicting which medicines a person's genetics will respond to best.

Manufacturing. In manufacturing, tools such as Autodesk Fusion 360, PTC Creo and others use generative AI to design physical objects. In some cases, they also produce those objects through 3D printing, computer-controlled machining or other fabrication methods. Generative AI can create machine parts and subassemblies of larger objects, for example, and can sometimes optimize designs for materials efficiency (minimizing waste), simplicity (fewest parts) and speed of production.
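The generate-evaluate-select pattern behind generative design can be shown in a toy form. The strength requirement and dimension ranges below are made up for illustration; real tools explore vastly larger design spaces with physics simulation rather than a one-line formula:

```python
# Toy "generative design" loop: randomly propose rectangular beam
# cross-sections, discard any that fail a minimum stiffness requirement
# and keep the survivor that uses the least material.
import random

random.seed(42)

MIN_SECTION_MODULUS = 50.0   # hypothetical strength requirement (cm^3)

def section_modulus(width, height):
    # Elastic section modulus of a rectangular section: S = w * h^2 / 6.
    return width * height ** 2 / 6.0

best = None
for _ in range(10_000):
    w = random.uniform(1.0, 20.0)   # candidate width, cm
    h = random.uniform(1.0, 20.0)   # candidate height, cm
    if section_modulus(w, h) < MIN_SECTION_MODULUS:
        continue                    # proposal too weak; discard it
    area = w * h                    # material use per unit length
    if best is None or area < best[0]:
        best = (area, w, h)

area, w, h = best
```

Production systems replace the random proposals with learned generative models and the formula with full structural analysis, but the optimize-under-constraints loop is the same idea.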

Entertainment. ChatGPT, Dall-E and other tools are already employed in generating conceptual art to guide scenario and environment development and are expected to generate full environments in the future. Generative AI tools are also being applied to background music for games. It's worth noting that artists and corporations are filing a flurry of lawsuits alleging copyright infringement and intellectual property theft, arguing that the use of their protected IP in training data, coupled with the ability to request output in a particular person's style, amounts to unfair use and copyright violation. These legal challenges are slowing the use of generative tools in some contexts.

Other applications of generative AI across industries

Text synthesis. AI text generation saw a major evolutionary leap in late 2022 with the release of ChatGPT, OpenAI's chatbot built on its GPT-3.5 large language model. It was a marked improvement over previous text generators, able to produce answers good enough to fool human readers far more often and for far longer than earlier systems. Users have employed it and tools like it, such as BLOOM, Flamingo, Jasper and many other large language models, for a variety of content-generation tasks: office memos, meeting minutes, starter code, condolence cards and, of course, school essays.
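At each step, a text generator of this kind scores every candidate next token, converts those scores into probabilities and samples one. The sketch below illustrates that mechanism with a made-up four-word vocabulary and made-up scores; a "temperature" setting controls how adventurous the sampling is:

```python
# Toy sketch of next-token sampling: logits -> softmax -> weighted draw.
# The vocabulary and scores are invented for illustration.
import math
import random

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, logits, temperature=1.0, rng=random):
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["cat", "dog", "lemming", "cantaloupe"]
logits = [2.0, 1.5, 0.5, -1.0]            # hypothetical model scores

# Low temperature concentrates probability on the top-scoring token.
cold = softmax(logits, temperature=0.1)
# High temperature flattens the distribution toward uniform.
hot = softmax(logits, temperature=10.0)
```

The "fluent but sometimes wrong" character of these tools follows directly from this design: the model always produces a plausible-looking next token, whether or not the underlying claim is true.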

The text these tools generate, though, is often only surface-plausible: syntactically correct but semantically empty or even self-contradictory. Such tools are giving you "not information but information-shaped sentences," as author Neil Gaiman put it.

The training data for these systems includes enormous amounts of poorly written, poorly constructed and factually inaccurate prose. Also, when asked for information on a subject, such programs seem willing and able to make up answers, complete with fabricated supporting citations, often called hallucinations, attributed to real or invented authorities. This leads to well-justified hesitation to use AI in life-or-death or other high-stakes situations, including complex application coding.

Text-to-speech. Whether reading its own words or someone else's, generative AI is also advancing speech synthesis: improving the quality of artificial readers for e-books, synthetic presenters for news clips and advertising (including clickbait posts on social media), synthetic characters in video games and literal chatbots that answer phone calls. Intonation, cadence and volume variations are all becoming more realistic, subtle and flexible. As with image synthesis, this improvement in quality is also increasing the threat of deepfake audio.
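Two of those prosody knobs, pitch and volume, can be demonstrated with a bare-bones signal. This sketch hand-writes the contours that a real text-to-speech system would predict with a neural network: a tone whose pitch rises, as at the end of a question, while its volume fades:

```python
# Toy prosody illustration: a half-second tone with a rising pitch
# contour (120 Hz -> 200 Hz) and a fading volume contour.
import math

SAMPLE_RATE = 16_000   # samples per second
DURATION = 0.5         # seconds

samples = []
phase = 0.0
n_total = int(SAMPLE_RATE * DURATION)
for n in range(n_total):
    t = n / SAMPLE_RATE
    freq = 120.0 + 80.0 * (t / DURATION)   # intonation: pitch glides up
    volume = 1.0 - 0.6 * (t / DURATION)    # dynamics: volume fades
    phase += 2 * math.pi * freq / SAMPLE_RATE
    samples.append(volume * math.sin(phase))
```

Writing `samples` out as 16-bit PCM would make it audible; the point here is only that intonation and volume are independent, continuously varying curves layered onto the raw signal.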

Image synthesis. OpenAI’s Dall-E 2 and other products — Midjourney, Deep Dream Generator, Big Sleep, etc. — use AI to create pictures based on text descriptions. If you tell one to create a ridiculous picture of 14 lemmings and a talking cantaloupe wearing a trench coat and pretending to be a private investigator, it will do so. Dall-E and its many competitors have taken a huge leap forward, in both their image quality and their ability to translate arbitrary text into images. For example, in a few months they overcame severe shortcomings, such as an inability to generate realistic human hands. Such systems are finding their way into advertising, product design, set design, film and other industries.

They are also showing potential as engines of misinformation and disinformation, as they can generate deepfake images of events that never happened or alter images of events that did. For example, Midjourney users generated and circulated on social media images of Pope Francis in a big, puffy white coat, Elon Musk hobnobbing with Alexandria Ocasio-Cortez and Donald Trump being dragged away by police.

Space synthesis. As with images, this kind of synthesis can occur with 3D spaces and objects, both real and digital. On the real-world side, applications such as Autodesk's Spacemaker (now Forma) can help design buildings and the spaces in them, or urban landscapes incorporating built and natural elements. In these situations, AI supplements human designers' work by filling in missing details or proposing solutions that fit specific code requirements or space and material constraints. Many companies, most notably Meta and all the major game creators, are developing applications to generate virtual spaces for game designs. These AI systems can continually generate new spaces, potentially making game worlds effectively infinite.
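The endless-new-spaces idea rests on procedural generation, which the following sketch shows at its very simplest, far below the sophistication of studio tools: carve random rectangular rooms into a grid, connect them with corridors and re-seed for an unlimited supply of fresh layouts:

```python
# Toy procedural space generator: random rooms on an ASCII grid,
# joined by L-shaped corridors. Every seed yields a new layout.
import random

def generate_map(width=40, height=20, n_rooms=5, seed=0):
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]   # start as solid wall
    centers = []
    for _ in range(n_rooms):
        w, h = rng.randint(4, 8), rng.randint(3, 6)
        x = rng.randint(1, width - w - 1)
        y = rng.randint(1, height - h - 1)
        for row in range(y, y + h):                 # carve the room
            for col in range(x, x + w):
                grid[row][col] = "."
        centers.append((x + w // 2, y + h // 2))
    # Connect each room to the previous one with an L-shaped corridor.
    for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
        for col in range(min(x1, x2), max(x1, x2) + 1):
            grid[y1][col] = "."
        for row in range(min(y1, y2), max(y1, y2) + 1):
            grid[row][x2] = "."
    return ["".join(row) for row in grid]

level = generate_map()
```

AI-driven versions replace the uniform randomness with learned models of what rooms, terrain or cities should look like, but the generate-on-demand structure is the same.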

Future generative AI examples

Although there’s no way to predict which generative AI examples and use cases show the most promise for the future, there are some, such as image generation and speech synthesis, that have shown enormous progress in the last few years. Other areas, such as medicine and manufacturing, have also proven enormously promising and show the wide range of fields that AI might contribute to. Progress in physical use cases appears slower, which makes sense given the inherent limits imposed by manipulating matter instead of data.

Such progress builds on itself, a dynamic on full display in 2022 and 2023. As the base tools become cheaper, more widely available and easier to use, the pool of people harnessing those tools broadens. This increases the number and type of situations those tools get trained to deal with, further accelerating the pace of change.

Incorporating generative AI into other AI-powered tool suites can turn them into a more powerful gestalt. For example, current code- and documentation-generation systems aren’t great, but as they improve and are combined with other kinds of AI systems already in place — for detecting coding errors, common security flaws and the use of licensed code in unlicensed ways, for example — the developer tool set will become more powerful and productive.

What’s important to remember is that there are antisocial and dangerous applications of AI that will also become easier in the same ways.

