Welcome to Degenerative AI
Can AI vs AI be a viable idea?
Recently, I attended a meet-up of people in the fields of AI and ID. It was interesting to see the progress of what is being created (wallets and tokens, anyone?), but I was having a hard time explaining what I do or why I was there besides basic curiosity. For me, AI is a new area that I have tried to master over the past 10 months, and I have mostly been educating others by oversimplifying complex concepts and/or helping entrepreneurs with introductions and potential angel funding in the AI space. But, as an aside, I have been joking with acquaintances that I was working on everything but generative AI.
So I bluntly said, “My interest is in degenerative AI.” A few of the attendees thought it was funny and the greatest approach ever, but I had provided them with some context before dropping that line.
Let me share some of the discussion points before going into various views on the topic.
Interestingly, there is no real “degenerative AI” concept out there. Ask ChatGPT and you will get nothing but the tentative definition below, derived by splitting the two words.
“If we break down the term, "degenerative" generally refers to something that is deteriorating or declining over time. In the context of AI, it could hypothetically refer to an AI system that degrades in performance over time, perhaps due to factors like outdated training data, changes in the problem space or environment, or lack of maintenance and updates.”
There was a recent reference using the words “degenerative AI” in Cosmos Magazine, but heck, I have been using that joke since September 2022, so I’ll claim the coining of it.
And the article was looking at degenerative AI very differently, mostly as a saturation of the system resulting in its collapse. For more details, check the article.
Also, is degenerative AI the true opposite of generative AI? Technically no. The real counterpart of generative modeling is discriminative modeling: a discriminative model predicts a category from features, while a generative model works the other way around, generating features given a certain categorization.
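To make that distinction concrete, here is a minimal sketch of my own (using scikit-learn on synthetic data; none of this comes from the meet-up discussion). The discriminative model can only classify; the generative model’s class-conditional parameters can be sampled to produce new feature vectors.

```python
# Minimal sketch: discriminative vs. generative modeling on toy data.
# Assumes numpy and scikit-learn (>= 1.0 for GaussianNB.var_) are installed.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Discriminative: models p(label | features) and can only classify.
disc = LogisticRegression().fit(X, y)
print("discriminative prediction:", disc.predict(X[:1]))

# Generative: models p(features | label); here we fit class-conditional
# Gaussians and sample new feature vectors for a chosen class.
gen = GaussianNB().fit(X, y)
label = 1
mean, var = gen.theta_[label], gen.var_[label]
sample = np.random.normal(mean, np.sqrt(var))
print("generated features for class", label, ":", sample)
```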
Well, that’s great but what do I mean by degenerative AI? Another great question, thank you for asking.
In my mind, degenerative AI is using AI against AI. The way I explain it is that AI will generate hallucinations once in a while. We, as humans, know that the human race does not have 3 eyes, 5 legs, 40 fingers, or a nose below the mouth. We also know what it looks like to bend a limb the right way. AI does not know that, and we have all seen examples of badly generated AI images with very recognizable errors.
Now, as a human, I can tell the AI model that what was generated was bad and therefore not an acceptable result, then train the model on its mistakes (learning from hallucinations via human input: a human should have 2 eyes above the nose, the mouth is on the face below the nose, we have 2 arms in the upper body with 5 fingers on each hand, and 2 legs with 5 toes on each foot in the lower body). Everything else, like what was generated prior, is therefore considered incorrect.
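Here is a minimal sketch of that feedback loop, under the assumption that generated images have already been reduced to feature vectors and that humans have flagged which ones contain hallucinations; all data and names here are hypothetical.

```python
# Minimal sketch of a human-in-the-loop feedback pass (hypothetical data).
# Each generated image is reduced to a feature vector; humans label which
# ones contain hallucinations (3 eyes, 40 fingers, ...), and a classifier
# learns to recognize those mistakes before they reach users.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 32))          # stand-in for image embeddings
human_flagged = rng.integers(0, 2, size=500)   # 1 = human marked as hallucination

screener = RandomForestClassifier(random_state=0).fit(features, human_flagged)

new_generation = rng.normal(size=(1, 32))
if screener.predict(new_generation)[0] == 1:
    print("likely hallucination -> send back for regeneration / human review")
else:
    print("passes the learned anatomy checks")
```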
Let’s unleash that newly acquired knowledge against image databases, video content, transaction logs, or why not an infrastructure network, and instruct the AI engine to go and try to find other “hallucinations” in these.
Technically, I am using AI to detect anomalies by learning from its errors and reporting them. From there, when the AI picks up and isolates the anomalies, the human can enter the picture and review them.
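As a sketch of that detect-and-escalate loop (the “transaction log” and thresholds below are made up for illustration), an unsupervised detector scores records, and anything it isolates goes to a human review queue rather than being rejected automatically:

```python
# Minimal sketch: AI flags anomalies, a human gets the final say.
# IsolationForest and the synthetic "transaction log" are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=100, scale=10, size=(1000, 1))   # typical amounts
odd = np.array([[900.0], [-50.0]])                        # injected outliers
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=1).fit(transactions)
flags = detector.predict(transactions)                    # -1 = anomaly

review_queue = transactions[flags == -1]
print(f"{len(review_queue)} records escalated for human review")
# A human now decides: real fraud/fake, or a legitimate edge case?
```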
We used an example in our discussion of fingerprint templates for consumer biometric authentication. Say we trained the AI to know that all fingerprints are made of curves going into concentric irregular circles. The AI could find one image and isolate it because it has a big straight line across the template. Definitely not normal. It could indeed be a fake generated template, BUT it could also belong to someone with a big scar from a prior injury. As a human, I may have knowledge of that person and the capacity to recognize it is not an anomaly.
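To hint at how that straight-line check might look, here is a toy sketch with OpenCV on a synthetic image; a real biometric pipeline would obviously be far more involved:

```python
# Toy sketch: flag a "fingerprint" template that contains a long straight
# line, the kind of pattern ridges normally don't produce. The image is
# synthetic; a real system would run on actual templates.
import cv2
import numpy as np

img = np.zeros((256, 256), dtype=np.uint8)
for r in range(20, 120, 12):                      # concentric "ridges"
    cv2.circle(img, (128, 128), r, 255, 1)
cv2.line(img, (10, 128), (246, 128), 255, 2)      # the suspicious scar/line

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=150, maxLineGap=5)

if lines is not None:
    print(f"{len(lines)} long straight segment(s) found -> escalate to a human")
else:
    print("no long straight segments; template looks ridge-like")
```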
Imagine expanding and scaling that concept across various systems and being able to tag “AI generated content, Human checked”.
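Such a tag is really just structured provenance metadata. A minimal sketch, with field names entirely of my own invention (a real system might lean on an existing standard such as C2PA):

```python
# Minimal sketch of an "AI generated, Human checked" provenance tag.
# All field names are hypothetical.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceTag:
    content_id: str
    ai_generated: bool
    human_checked: bool
    reviewer: str
    reviewed_at: str

tag = ProvenanceTag(
    content_id="img-0042",
    ai_generated=True,
    human_checked=True,
    reviewer="analyst@example.com",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(tag), indent=2))
```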
When asking AI to help with AI-vs-AI scenarios, it was quite interesting to see that it didn’t want to go into any doom cases, only very mild ones.
Here is what the answer was:
[AI Generated] There are many scenarios where AI systems can be pitted against each other. Here are a few hypothetical examples:
Remember, while these competitions can be interesting and informative, they also raise important ethical and safety considerations, especially when AI systems are tasked with real-world decisions that could impact human lives or personal data.
[End of AI content]
The 2 examples I really like are:
But if hallucinations can be used as a training tool to fight other hallucinations and anomalies, is there a way to mitigate AI hallucinations or eliminate them completely, rendering my degenerative AI concept obsolete? Well, the idea is to generate better hallucinations, ones that are guided and trained. Like an AI dream controlled by humans. So how do we get there?
AI hallucination, also known as over-interpretation, is a phenomenon where an AI system generates or perceives features in data that aren't actually present. This can happen when an AI model is overfitted or not properly trained. Here are some strategies to mitigate AI hallucination:
The best approach often involves a combination of these strategies, tailored to the specific AI task and data. Models that are tailored for the job (smaller, dedicated models in a specific area, e.g. SynthAI) will also help achieve better anomaly detection rates.
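One way to read “smaller, dedicated models” is a thin router in front of per-domain detectors. A toy sketch, where both detectors and their rules are hypothetical placeholders:

```python
# Toy sketch: route inputs to small, dedicated anomaly detectors instead
# of one general-purpose model. Detector logic is placeholder only.
from typing import Callable, Dict

def fingerprint_detector(item: dict) -> bool:
    return item.get("straight_line_length", 0) > 150   # made-up rule

def transaction_detector(item: dict) -> bool:
    return abs(item.get("amount", 0)) > 500            # made-up rule

DETECTORS: Dict[str, Callable[[dict], bool]] = {
    "fingerprint": fingerprint_detector,
    "transaction": transaction_detector,
}

def is_anomalous(item: dict) -> bool:
    detector = DETECTORS.get(item["domain"])
    if detector is None:
        raise ValueError(f"no dedicated model for {item['domain']!r}")
    return detector(item)

print(is_anomalous({"domain": "transaction", "amount": 900}))                # True
print(is_anomalous({"domain": "fingerprint", "straight_line_length": 40}))  # False
```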
The interesting part is that even while adding AI-generated content to the above recommendation, the “Human Oversight” point was not mine. It was the AI that suggested it. I thought that was quite interesting and aligned quite well with my “AI generated, Human checked” concept.
Now, you understand the concept of degenerative AI.
The name fits because it is not really generating anything; it is learning from its mistakes to correct others, making fewer and fewer of them. So, in the context of AI, it is not a system that degrades in performance over time, as stated at the beginning, but one containing less and less garbage data, hence clean, smaller models able to generate accurate output. Degenerative as in becoming smaller and smaller and finding the right data in dedicated models. Like “honey, I shrunk AI”.
It reminds me of the anecdote I mentioned in my book “The Delivery Man”, when the phone manufacturers declared in February of 2007 that users would never use a touchscreen on a mobile device. In July 2007, Steve Jobs stood on stage with the iPhone. What did they miss? Well, their data source (the people they surveyed for their opinions) was selected by their trade organizations. They found people who were in the same industry or ecosystem, like friends and family. It was prone to generating a hallucination, and they didn’t realize it. Call it echo chamber, user error, or hallucination, there were no corrective actions in place to polish the end result. Their model was small but full of bias. This is the other aspect of degenerative AI: removing these biases by making sure AI can learn from itself what anomalies are.
But that’s just my answer to what interests me in AI. What’s yours?