The idea that adversarial examples in machine learning are not flaws but essential components has intrigued me ever since I read Gabriel Goh's commentary on the original paper, "Adversarial Examples Are Not Bugs, They Are Features." As someone who has built businesses in diverse fields ranging from deep tech to education, I instinctively approach such concepts one layer deeper: how does this idea reshape predictive decision-making, and how should entrepreneurs react to it?
Let’s break it down. When machine learning models fall for adversarial examples (small, targeted modifications that trick the system into misclassifying an input), they’re doing something unexpected but not irrational. These models are leveraging features hidden in the data that humans don’t perceive but machines find highly predictive. These features are effective but fragile, and that's where things get fascinating.
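To make the idea concrete, here is a minimal sketch of an adversarial perturbation (FGSM-style) against a toy linear classifier, using only NumPy. The model, the synthetic data, and the epsilon value are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "images": class 0 clusters near -1, class 1 near +1.
X = np.concatenate([rng.normal(-1, 0.5, (50, 10)),
                    rng.normal(+1, 0.5, (50, 10))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# A hand-set linear classifier: predict class 1 if w.x > 0.
w = np.ones(10)

def predict(X):
    return (X @ w > 0).astype(int)

# FGSM-style step: move each input by epsilon in the direction that
# increases the loss; for this linear model that is simply sign(w),
# pushed against the true label.
epsilon = 1.5
X_adv = X - epsilon * np.sign(w) * np.where(y == 1, 1, -1)[:, None]

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(X_adv) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The point of the sketch is the asymmetry: a perturbation small relative to the data's spread is enough to flip nearly every prediction, because the model leans on a direction in input space that is predictive but fragile.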
Insights Entrepreneurs Should Draw from This
"Blind Spots" Aren’t Always Bad
From my experience building startups and machine-learning-enabled tools, blind spots often mark opportunities. The so-called "non-robust features" that machines pick up on are clues that the dataset itself has richer patterns than initially understood. Think about this through the lens of your product: data misclassifications, quirks in user behavior, or patterns you haven’t explained yet might actually be signals waiting to be decoded.
Transferability Isn’t Always a Metric of Strength
The most surprising takeaway from Goh's article was robust feature leakage: when a model trained on relabeled data performs surprisingly well, part of that performance may come from robust features that leaked into the training set rather than from the non-robust features you credit. Entrepreneurs often rush to generalize success: you have one successful niche product, so why not replicate it in other fields? But as this phenomenon demonstrates, not every strength transfers universally; sometimes what works well in one niche is counterproductive or irrelevant elsewhere.
Adversarial Training Reveals What You’ve Been Missing
Just as training models with adversarial attacks exposes non-robust or hidden features, applying counterintuitive strategies rigorously to your business (for example, focusing on non-core customer groups or A/B testing "ridiculous" ideas) can unearth which parts of your offering hold true, regardless of context, and which don't.
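The training idea behind this analogy can be sketched in a few lines: a minimal adversarial-training loop for a logistic-regression model, assuming an FGSM-style inner attack. The data, learning rate, and epsilon are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-1, 0.5, (100, 5)),
                    rng.normal(+1, 0.5, (100, 5))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(5)
lr, epsilon = 0.1, 0.3

for _ in range(200):
    # Inner step: perturb each input toward the wrong side of the
    # current decision boundary (FGSM for a linear model).
    X_adv = X - epsilon * np.sign(w) * np.where(y == 1, 1, -1)[:, None]
    # Outer step: gradient descent on the logistic loss of the
    # perturbed batch, so the model learns to resist the attack.
    p = 1 / (1 + np.exp(-X_adv @ w))
    w -= lr * X_adv.T @ (p - y) / len(y)

# Evaluate under the same attack against the final weights.
X_attack = X - epsilon * np.sign(w) * np.where(y == 1, 1, -1)[:, None]
adv_acc = ((X_attack @ w > 0).astype(int) == y).mean()
print(f"accuracy under attack after adversarial training: {adv_acc:.2f}")
```

Training against the perturbed inputs forces the model onto features that survive the attack, which is exactly the filtering effect the paragraph above maps onto business strategy.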
What the Data Tells Us About This Concept
Gabriel Goh cites examples from the paper's experimental datasets, revealing the specific impact of robust feature leakage. On the dataset with randomized labels (D_rand), a linear classifier built only on robust features still achieved 23.5% accuracy on the clean test set, suggesting that robust features had unintentionally leaked through. This outcome highlights how assumptions in a dataset can bleed into new circumstances. As entrepreneurs, analyze your results similarly. When your product unexpectedly works in new conditions, are you benefiting from a tangible asset, or is this result a "leak" from adjacent efforts that might lack durability?
In the deterministically relabeled dataset (D_det), this leakage largely disappears, leaving non-robust features to do the heavy lifting in generalization. For teams or businesses refining their playbooks, this shows how "clean" testing environments help you separate assumptions baked into your earlier processes from genuine, scalable features in your strategy.
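The difference between the two relabeling schemes can be sketched as follows. This shows only the label construction (in the full experiment, each image is then adversarially perturbed toward its new target label), and the permutation `(y + 1) % num_classes` is an illustrative stand-in for the fixed label mapping used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 10
y = rng.integers(0, num_classes, size=1000)  # original labels

# D_rand: targets are drawn independently of the original labels, so
# any robust cue left in an image can align with its new label only
# by chance (about 1 in num_classes of the time) -- the leakage route.
t_rand = rng.integers(0, num_classes, size=1000)

# D_det: targets are a deterministic function of the original labels,
# so robust cues of the source class are systematically misaligned
# with the new label, closing off that leakage route.
t_det = (y + 1) % num_classes

print("D_rand agreement with original labels:", (t_rand == y).mean())
print("D_det agreement with original labels:", (t_det == y).mean())
```

The deterministic mapping never agrees with the original label, which is why any accuracy a model earns on D_det must come from the injected non-robust features rather than from leaked robust ones.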
How to Apply the Lesson of Adversarial Examples in Business
Build Stress Tests Into Decisions
Much like adversarial training sharpens a machine-learning model, subject your business plans to intentional challenges. One way I’ve done this is by creating testing battlegrounds where your team must operate purely on data they don’t yet fully understand or trust. This will reveal which "features" of your strategy support stability versus short-term wins.
Focus on Fundamentals Without Sacrificing Detail
The adversarial examples paper highlights the danger of conflating a model's standard accuracy with its robustness to unexpected attacks. For entrepreneurs, this could mean ironing out the foundational success metrics of your business (repeatable revenue streams, clear IP ownership, measurable value for customers) while staying nimble with smaller, perhaps disposable opportunities.
Embrace the Quirks in User Behavior
Everything in AI reminds me of product design at some level. An adversarial example uses quirks the creator didn't foresee but the machine capitalizes on. If your customers are using your product in unpredictable and unintended ways, capitalize on it. Refine the experience around patterns you didn’t expect to matter initially.
What Most Founders Get Wrong
- Assuming Predictive Models Learn "Truths": Many founders rely on AI for idealized predictions, expecting the results to mirror human reasoning. But machines don’t learn truths, they optimize patterns, and non-robust patterns especially may skew results. Don't trust every pattern blindly, even when it offers short-term utility.
- Ignoring External Noise: Noise, or seemingly unimportant data, is often disregarded in building products. As the paper suggests, non-robust features are highly predictive but fragile. Build your datasets and business experiments knowing they’re filled with such fragile but useful layers.
- Demanding Transferable Success Early On: Even when a strategy works with one customer demographic, founders expect that success to readily apply elsewhere. However, as the D_rand experiment demonstrates, generalizations can be a mix of strong foundational elements and accidents.
Final Thoughts for Entrepreneurs
What resonated most with me in Goh's follow-up article was how “faint robust cues in attacks” can reshape predictions in subtle, profound ways. For startups, this is equivalent to unexpected insights pulled from seemingly chaotic channels: testimonials, edge cases, early adopters. These insights often turn into cornerstones of true differentiation, as long as you don’t treat them as noise or novelties.
Models learn from the data we provide, warts and all. Your startup or product will too. The critical question is: are you paying attention to the quirks that machines, and your business, uniquely uncover?
FAQ
1. What are adversarial examples in machine learning?
Adversarial examples are small, targeted modifications in data that confuse machine learning models into misclassifying information. They reveal that models utilize hidden, non-robust features that are highly predictive but fragile.
2. Why are adversarial examples considered features and not bugs?
According to Gabriel Goh and Ilyas et al., adversarial examples leverage patterns in data that are non-robust but effective predictors, implying that adversarial behavior is an intrinsic part of the model rather than a flaw.
3. How do adversarial examples influence dataset design in experiments?
Adversarial examples challenge machine learning researchers to construct datasets carefully, ensuring robustness against misleading feature leakage (e.g., the D_rand and D_det constructions).
4. What is robust feature leakage in adversarial examples?
Robust feature leakage occurs when robust cues unintentionally transfer from adversarial examples, contributing to model accuracy in unseen scenarios. This is noticeable in randomized datasets like D_rand.
5. How much accuracy in adversarially trained models can be attributed to robust features?
Based on a linear analysis over robust features, robust feature leakage accounts for at least 23.5% of the accuracy on clean test data in the D_rand experiment.
6. How can non-robust features help models generalize to clean datasets?
Non-robust features, often subtle patterns imperceptible to humans, are genuinely predictive on the underlying data distribution, so a model trained on them alone can still generalize to clean test data, even though they are fragile under attack.
7. How can adversarial training benefit entrepreneurs?
Adversarial training can reveal hidden patterns in customer behavior or unexpected product usage, allowing businesses to refine and stress-test their strategies.
8. What's the difference between the D_rand and D_det datasets in adversarial studies?
D_rand: Labels are randomized, so robust cues from the original images can align with the new labels by chance, enabling leakage.
D_det: Labels are a deterministic function of the original labels, preventing accidental alignment and isolating non-robust features.
9. How do adversarial examples challenge predictive decision-making?
They demonstrate that predictive models optimize for patterns, not truths, suggesting the need to embrace their limitations and recalibrate decisions accordingly.
10. Can adversarial phenomena extend to other fields beyond machine learning?
Yes, the quirks revealed by adversarial attacks in AI parallel unexpected insights in business, product design, and customer behavior analysis.
About the Author
Violetta Bonenkamp, also known as MeanCEO, is an experienced startup founder with an impressive educational background including an MBA and four other higher education degrees. She has over 20 years of work experience across multiple countries, including 5 years as a solopreneur and serial entrepreneur. Throughout her startup experience she has applied for multiple startup grants at the EU level, in the Netherlands and Malta, and her startups received quite a few of those. She’s been living, studying and working in many countries around the globe and her extensive multicultural experience has influenced her immensely.
Violetta Bonenkamp's expertise in the CAD sector, IP protection, and blockchain
Violetta Bonenkamp is recognized as a multidisciplinary expert with significant achievements in the CAD sector, intellectual property (IP) protection, and blockchain technology.
CAD Sector:
- Violetta is the CEO and co-founder of CADChain, a deep tech startup focused on developing IP management software specifically for CAD (Computer-Aided Design) data. CADChain addresses the lack of industry standards for CAD data protection and sharing, using innovative technology to secure and manage design data.
- She has led the company since its inception in 2018, overseeing R&D, PR, and business development, and driving the creation of products for platforms such as Autodesk Inventor, Blender, and SolidWorks.
- Her leadership has been instrumental in scaling CADChain from a small team to a significant player in the deeptech space, with a diverse, international team.
IP Protection:
- Violetta has built deep expertise in intellectual property, combining academic training with practical startup experience. She has taken specialized courses in IP from institutions like WIPO and the EU IPO.
- She is known for sharing actionable strategies for startup IP protection, leveraging both legal and technological approaches, and has published guides and content on this topic for the entrepreneurial community.
- Her work at CADChain directly addresses the need for robust IP protection in the engineering and design industries, integrating cybersecurity and compliance measures to safeguard digital assets.
Blockchain:
- Violetta’s entry into the blockchain sector began with the founding of CADChain, which uses blockchain as a core technology for securing and managing CAD data.
- She holds several certifications in blockchain and has participated in major hackathons and policy forums, such as the OECD Global Blockchain Policy Forum.
- Her expertise extends to applying blockchain for IP management, ensuring data integrity, traceability, and secure sharing in the CAD industry.
Violetta is a true multiple specialist who has built expertise in Linguistics, Education, Business Management, Blockchain, Entrepreneurship, Intellectual Property, Game Design, AI, SEO, Digital Marketing, cyber security and zero code automations. Her extensive educational journey includes a Master of Arts in Linguistics and Education, an Advanced Master in Linguistics from Belgium (2006-2007), an MBA from Blekinge Institute of Technology in Sweden (2006-2008), and an Erasmus Mundus joint program European Master of Higher Education from universities in Norway, Finland, and Portugal (2009).
She is the founder of Fe/male Switch, a startup game that encourages women to enter STEM fields, and also leads CADChain, and multiple other projects like the Directory of 1,000 Startup Cities with a proprietary MeanCEO Index that ranks cities for female entrepreneurs. Violetta created the "gamepreneurship" methodology, which forms the scientific basis of her startup game. She also builds a lot of SEO tools for startups. Her achievements include being named one of the top 100 women in Europe by EU Startups in 2022 and being nominated for Impact Person of the year at the Dutch Blockchain Week. She is an author with Sifted and a speaker at different Universities. Recently she published a book on Startup Idea Validation the right way: from zero to first customers and beyond, launched a Directory of 1,500+ websites for startups to list themselves in order to gain traction and build backlinks and is building MELA AI to help local restaurants in Malta get more visibility online.
For the past several years Violetta has been living between the Netherlands and Malta, while also regularly traveling to different destinations around the globe, usually due to her entrepreneurial activities. This has led her to start writing about different locations and amenities from the POV of an entrepreneur. Here’s her recent article about the best hotels in Italy to work from.

