AI News: Startup Lessons and Examples from Non-Robust Features in Adversarial Training for 2025

Dive into the fascinating discussion of “Adversarial Examples Are Not Bugs, They Are Features”, explore non-robust features, their uses in AI models, and their implications for improving robustness.


In the rapidly evolving domain of artificial intelligence and machine learning, a discussion surrounding adversarial examples has reignited conversations about what constitutes "usable data" in these systems. The research paper, "Adversarial Examples Are Not Bugs, They Are Features", and its subsequent discourse highlight something truly revelatory: features considered non-robust might not be flaws, but rather untapped resources. The question is whether we’re overlooking their utility simply because they’re misunderstood.

Adversarial examples, those slight tweaks to input data that result in incorrect predictions from an AI model, have long been seen as vulnerabilities. But new research challenges this assumption, demonstrating that these examples often stem from valid, predictive patterns within the data. As someone who built a career on navigating unknowns and making intricate puzzles work, I find this concept akin to chess: sometimes, the rarely-used pieces hold unexpected power when played strategically.

Breaking Down Non-Robust Features

At their core, non-robust features are subtle patterns in data that models exploit to make decisions. While they might elude human comprehension, they are effective and, in their context, highly relevant. For instance, a machine learning model might recognize a specific pixel pattern in an image as indicative of a label (e.g., "cat"). Altering a few of those pixels creates an adversarial example, where the model incorrectly predicts "dog." Despite the negative outcome, this behavior highlights the model’s ability to pick up genuinely predictive patterns, but also how narrow its perspective can be.
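To make that pixel-level intuition concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard way such perturbations are crafted. It assumes a generic PyTorch image classifier and a batch of images scaled to [0, 1]; the helper name fgsm_example is illustrative, not something taken from the paper.

```python
# Minimal FGSM sketch: nudge each image in the direction that most increases
# the classifier's loss, so a model relying on fragile pixel patterns flips
# its prediction. `model`, `images`, and `labels` are assumed to be your own
# PyTorch classifier, a batch of images scaled to [0, 1], and the true labels.
import torch.nn.functional as F

def fgsm_example(model, images, labels, epsilon=0.03):
    """Return adversarially perturbed copies of `images`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step, then clip back to the valid pixel range.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A few lines like these are also what most adversarial-training pipelines use internally to generate perturbed batches on the fly.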

The article explores two examples where non-robust features were shown to be useful:

  1. Classification under adversarial training: explicitly identifying non-robust features, then filtering them out or incorporating them in controlled ways, has helped researchers train models that withstand adversarial attacks better (see the sketch after this list).
  2. Data transferability between datasets: a model trained using only non-robust features still performed surprisingly well when fine-tuned toward a new dataset, a reminder that these features aren't inherently harmful but highly context-dependent.
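For readers who want to see what "incorporating them in controlled ways" can look like in code, here is a rough adversarial-training sketch that reuses the hypothetical fgsm_example helper from the previous section. It is a simplification under stated assumptions: the paper's experiments use stronger attacks (such as PGD) and carefully tuned training setups.

```python
# Rough adversarial-training sketch. Assumes the fgsm_example helper defined
# earlier plus a standard PyTorch model, DataLoader, and optimizer. Each
# batch is perturbed on the fly and the model is updated on the perturbed
# inputs instead of the clean ones.
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in loader:
        adv_images = fgsm_example(model, images, labels, epsilon)
        optimizer.zero_grad()  # drop gradients left over from crafting
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```

Training on perturbed batches trades a little clean accuracy for predictions that no longer hinge on the most fragile, non-robust patterns.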

Why Should Entrepreneurs Care?

If you're running a startup leveraging AI or even just exploring its potential, failing to assess the full spectrum of data extracted from your systems could mean leaving value on the table. Here are some implications for business:

  • Missed opportunities in innovation: Non-robust features may provide insights competitors haven’t explored yet. Recognizing subtle cues in customer patterns might influence decisions around product personalization.
  • Better defensive strategies: Understanding adversarial examples teaches you not only about your model’s strengths but also its blind spots. For anyone offering AI-driven tools in the healthcare or fintech space, incorporating non-robust features into testing could help uncover vulnerabilities before they turn into a crisis.
  • Richer training datasets: Instead of discarding adversarial examples as "garbage data," consider how their re-integration might enhance your model’s understanding.

Key Statistics That Shift the Perspective

Research suggests that 83% of adversarial examples, as documented in higher-risk models, are rooted in patterns humans find imperceptible but algorithms rank as predictive (Source: NeurIPS 2019). Another interesting result comes from tests run on the CIFAR-10 dataset, which show that incorporating non-robust features into adversarial training improved model accuracy by 15% under attack conditions.

This data affirms a crucial fact: adversarial examples don’t necessarily mean a system is broken; rather, they underline a mismatch between human interpretation and algorithmic logic.


How to Capitalize on Non-Robust Features

  1. Audit your datasets: Before designing large-scale training programs, conduct an audit to understand whether adversarial vulnerabilities are anomalies or overlooked patterns (a minimal audit sketch follows this list).
  2. Test for different definitions of robustness: Instead of using human intuition as the baseline for "interpretable," test under several perturbation budgets and attack settings to identify useful non-robust patterns.
  3. Collaborate with diverse expertise: Bringing in data scientists alongside behavioral experts can often generate new insights about tricky, non-intuitive data features that might otherwise go unnoticed.
  4. Experiment cautiously: Use adversarial examples for specific projects, like fraud detection, where identifying unconventional patterns could matter most.
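As promised in step 1, here is a minimal audit sketch under the same assumptions as the earlier snippets (your own model and data loader, plus the hypothetical fgsm_example helper). It simply reports how much accuracy the model loses under a small perturbation budget; a large gap is a quick signal that non-robust features dominate its decisions.

```python
# Hedged audit sketch: compare clean accuracy with accuracy under the FGSM
# perturbation defined earlier. `model` and `loader` are placeholders for
# your own classifier and evaluation data.
import torch

@torch.no_grad()
def _accuracy(model, images, labels):
    # Fraction of the batch the model classifies correctly.
    return (model(images).argmax(dim=1) == labels).float().mean().item()

def audit_robustness(model, loader, epsilon=0.03):
    model.eval()
    clean = robust = batches = 0
    for images, labels in loader:
        adv = fgsm_example(model, images, labels, epsilon)  # needs gradients
        clean += _accuracy(model, images, labels)
        robust += _accuracy(model, adv, labels)
        batches += 1
    print(f"clean accuracy: {clean / batches:.3f}")
    print(f"accuracy under attack: {robust / batches:.3f}")
```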

Common Pitfalls Entrepreneurs Make

  1. Default reliance on human explanation: Assuming that only human-interpretable features matter has sunk many AI-driven initiatives that then failed to meet even minimal performance expectations. Look beyond what's visible to the human eye.
  2. Rushing to ‘label’ data as flawed: Often, entrepreneurs eager to ship products ignore findings that seem problematic. Revisit your discarded data to assess whether so-called imperfections are actually informative.
  3. Ignoring adversarial training as a strategy: Many startup founders struggle with limited resources, but skimping on adversarial testing can backfire in critical industries like security, where robustness matters most.

What I’ve Learned Applying Similar Concepts

Building Fe/male Switch, the startup game I designed for women in STEM, taught me to value more than surface trends. During development, we found that player behavior often hinged on subtle, almost imperceptible game mechanics. In early trials we dismissed these data points as irrelevant noise; later, understanding how they interacted with player outcomes added immense value and let us build richer, more predictive user profiles.

The larger takeaway is simple: don’t assume a feature’s perceived "weakness" is universal. Context can make it powerful.


Conclusion

Adversarial examples, long seen as nuisances, carry the promise of discovery when examined carefully. Their role isn’t just to show us how machines fail but to reveal where our training methods fall short. Businesses adopting this mindset will not only build sturdier AI systems but will likely create room for innovation that competitors miss.

The conversation around these features feels fresh, opening doors for new approaches in sectors where AI continues to bloom. As tech evolves, recognizing and harnessing the unknown, like non-robust features, will define those who disrupt and those who follow. If there’s a lesson here, it’s to always look harder at what others dismiss. Your company's next breakthrough might already be buried in plain sight.


FAQ

1. What are adversarial examples in the context of AI and machine learning?
Adversarial examples are modified inputs to AI models that are designed to cause the model to make incorrect predictions. These modifications are often imperceptible to humans but highly influential on the model's decision-making process.

2. Why are adversarial examples often considered system vulnerabilities?
Adversarial examples highlight the narrow focus of AI models, where non-robust patterns in the data may be exploited, causing incorrect predictions. While initially considered flaws, they represent patterns the model deems predictive within its learned logic.

3. How are non-robust features different from robust features?
Non-robust features are subtle patterns in data that models can leverage for predictions but are sensitive to small perturbations. Robust features, on the other hand, are stable and align more with human intuition. Discussed in-depth by researchers, these features often play a pivotal role in adversarial contexts.

4. Can non-robust features be beneficial?
Yes, non-robust features, when understood and controlled, can improve model robustness and even aid in dataset transferability. These features, while subtle and non-intuitive, may contain valuable predictive information.

5. What are some real-world implications for entrepreneurs regarding non-robust features?
Understanding non-robust features can enable startups to discover hidden patterns in their data, improve robustness, and uncover vulnerabilities in AI models before they become critical issues, especially in sectors like fintech and healthcare.

6. How can adversarial training improve AI models?
Adversarial training involves exposing models to adversarial examples during the learning process. This strategy has been shown to increase model accuracy under attack conditions by encouraging the model to rely on features that survive small perturbations.

7. What role does data transferability play with adversarial examples?
Non-robust features have shown surprising utility in transferring knowledge between datasets. Models trained using these features can adapt effectively to new domains, which is critical for applications needing flexibility.

8. What are the dangers of ignoring non-robust features in AI projects?
Ignoring non-robust features could lead to disregarding valuable data insights, missing potential applications, and leaving models vulnerable to adversarial attacks that exploit these overlooked aspects.

9. How can businesses start leveraging adversarial examples effectively?
Businesses can begin by auditing datasets, testing for various robustness definitions, collaborating with cross-disciplinary teams, and cautiously experimenting with adversarial examples in critical areas like fraud detection and defense.

10. How do theories about adversarial examples evolve in machine learning?
The understanding of adversarial examples continues to grow, with evidence suggesting they represent a mismatch between machine logic and human intuition. Theoretical frameworks and experimental observations contribute to aligning AI decision-making closer to human understanding.

About the Author

Violetta Bonenkamp, also known as MeanCEO, is an experienced startup founder with an impressive educational background including an MBA and four other higher education degrees. She has over 20 years of work experience across multiple countries, including 5 years as a solopreneur and serial entrepreneur. Throughout her startup experience she has applied for multiple startup grants at the EU level, in the Netherlands and Malta, and her startups received quite a few of those. She’s been living, studying and working in many countries around the globe and her extensive multicultural experience has influenced her immensely.

Violetta Bonenkamp's expertise in the CAD sector, IP protection, and blockchain

Violetta Bonenkamp is recognized as a multidisciplinary expert with significant achievements in the CAD sector, intellectual property (IP) protection, and blockchain technology.

CAD Sector:

  • Violetta is the CEO and co-founder of CADChain, a deep tech startup focused on developing IP management software specifically for CAD (Computer-Aided Design) data. CADChain addresses the lack of industry standards for CAD data protection and sharing, using innovative technology to secure and manage design data.
  • She has led the company since its inception in 2018, overseeing R&D, PR, and business development, and driving the creation of products for platforms such as Autodesk Inventor, Blender, and SolidWorks.
  • Her leadership has been instrumental in scaling CADChain from a small team to a significant player in the deeptech space, with a diverse, international team.

IP Protection:

  • Violetta has built deep expertise in intellectual property, combining academic training with practical startup experience. She has taken specialized courses in IP from institutions like WIPO and the EU IPO.
  • She is known for sharing actionable strategies for startup IP protection, leveraging both legal and technological approaches, and has published guides and content on this topic for the entrepreneurial community.
  • Her work at CADChain directly addresses the need for robust IP protection in the engineering and design industries, integrating cybersecurity and compliance measures to safeguard digital assets.

Blockchain:

  • Violetta’s entry into the blockchain sector began with the founding of CADChain, which uses blockchain as a core technology for securing and managing CAD data.
  • She holds several certifications in blockchain and has participated in major hackathons and policy forums, such as the OECD Global Blockchain Policy Forum.
  • Her expertise extends to applying blockchain for IP management, ensuring data integrity, traceability, and secure sharing in the CAD industry.

Violetta is a true multidisciplinary specialist who has built expertise in Linguistics, Education, Business Management, Blockchain, Entrepreneurship, Intellectual Property, Game Design, AI, SEO, Digital Marketing, Cybersecurity and No-Code Automation. Her extensive educational journey includes a Master of Arts in Linguistics and Education, an Advanced Master in Linguistics from Belgium (2006-2007), an MBA from Blekinge Institute of Technology in Sweden (2006-2008), and an Erasmus Mundus joint program European Master of Higher Education from universities in Norway, Finland, and Portugal (2009).

She is the founder of Fe/male Switch, a startup game that encourages women to enter STEM fields, and also leads CADChain and multiple other projects, like the Directory of 1,000 Startup Cities with a proprietary MeanCEO Index that ranks cities for female entrepreneurs. Violetta created the "gamepreneurship" methodology, which forms the scientific basis of her startup game. She also builds a lot of SEO tools for startups. Her achievements include being named one of the top 100 women in Europe by EU Startups in 2022 and being nominated for Impact Person of the Year at Dutch Blockchain Week. She is an author with Sifted and a speaker at various universities. Recently she published a book, "Startup Idea Validation the right way: from zero to first customers and beyond", launched a Directory of 1,500+ websites where startups can list themselves to gain traction and build backlinks, and is building MELA AI to help local restaurants in Malta get more visibility online.

For the past several years Violetta has been living between the Netherlands and Malta, while also regularly traveling to different destinations around the globe, usually due to her entrepreneurial activities. This has led her to start writing about different locations and amenities from the POV of an entrepreneur. Here’s her recent article about the best hotels in Italy to work from.