AI News: How to Protect Against Adversarial Examples in 2025 – Startup News and Tips for Entrepreneurs

Explore the debate over whether adversarial examples in ML models are features or bugs, with insights on non-robust features, vulnerabilities, and robust learning strategies.


In recent years, the debate surrounding adversarial examples has sparked significant interest among machine learning researchers and practitioners. As someone deeply immersed in innovative technologies and startup ecosystems, I’ve seen these discussions extend beyond the academic world, influencing how entrepreneurs perceive artificial intelligence and its potential vulnerabilities. A common theme is whether adversarial examples should be regarded as bugs (flaws in a model’s design) or as features (inherent aspects of the data that models exploit). Both perspectives are valid in specific contexts, but the implications for startups relying on machine learning are profound.


Getting to Grips with Adversarial Examples in AI

Adversarial examples are data inputs, subtly altered, that cause machine learning models to produce incorrect results. For example, an image of a panda could be slightly modified so that a neural network misclassifies it as a gibbon, showcasing the vulnerability of these models. To put it simply, they exploit the weak points in a system's data interpretation.
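
To make the idea concrete, here is a minimal numpy sketch of a gradient-sign attack on a toy linear classifier. This is an illustration, not the panda experiment from the literature: the "model" is a hand-built linear scorer, and the dimensions and epsilon are chosen for clarity. The point it shows is that many individually tiny feature changes, each within a small budget, can add up to flip the prediction.

```python
import numpy as np

# Toy linear classifier over 1000 features: score = w . x, class 1 if score > 0.
d = 1000
w = np.where(np.arange(d) % 2 == 0, 1.0, -1.0)  # fixed +/-1 weights

def predict(x):
    return 1 if w @ x > 0 else 0

x = 0.001 * w            # clean input, score = 1.0 -> class 1
eps = 0.002              # tiny per-feature perturbation budget

# Gradient-sign step: move each feature against the gradient of the score.
# For a linear model that gradient is just w, so the step is -eps * sign(w).
x_adv = x - eps * np.sign(w)   # new score = 1.0 - eps * d = -1.0 -> class 0

print(predict(x), predict(x_adv))   # 1 0
print(np.max(np.abs(x_adv - x)))    # 0.002: each feature barely moves
```

Each coordinate changes by at most 0.002, yet the aggregate effect on the score is large enough to flip the class, which is exactly the high-dimensional intuition behind imperceptible adversarial perturbations.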

A 2019 paper by Ilyas et al., "Adversarial Examples Are Not Bugs, They Are Features," argued that adversarial examples arise from non-robust features in the data. These features are patterns that humans wouldn’t recognize but that are highly predictive for the algorithm. Another viewpoint, detailed in the response article "Adversarial Examples are Just Bugs, Too," acknowledges the existence of non-robust features but argues that adversarial examples can also be model-specific flaws, i.e., bugs.

As a founder relying on AI-driven tools, understanding this nuance is essential. Knowing what to expect from machine learning models is the first step in building resilient technology solutions.


Are Adversarial Examples "Features" or "Bugs"?

To assess whether adversarial examples are bugs or features, let’s break this down:

  1. Non-Robust Features (Features)
    The original paper posits that adversarial examples exploit subtle patterns in the data distribution. Models learn to detect these patterns because, mathematically speaking, they often improve accuracy, even though they appear nonsensical to humans. The issue is that what works mathematically may come at the cost of robustness: a tiny tweak, such as adding imperceptible noise, can weaponize these non-robust features and mislead the model.

  2. Model-Specific Vulnerabilities (Bugs)
    Critics like Preetum Nakkiran argue that some adversarial examples aren’t due to the data at all; they are the result of quirks in a model's design and training process. For instance, experiments show that model-specific adversarial examples often don’t transfer well to other models, suggesting they are exploitable vulnerabilities of a particular model rather than properties of the dataset.

  3. Hybrid Reality
    The truth is that adversarial examples are not neatly confined to one category. Both explanations, features and bugs, coexist. And therein lies the challenge: patching one doesn't eliminate the other. A nuanced understanding is critical.
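
Transferability is the standard empirical lens on this question: attacks that transfer across independently trained models point to the data (features), while attacks that only fool one model point to that model's quirks (bugs). The sketch below is a toy numpy experiment under assumed synthetic data: two logistic-regression models are trained on disjoint halves of the same Gaussian-blob data, a gradient-sign attack is crafted against model A only, and both models are evaluated on it. Because two linear models on the same distribution learn nearly the same decision boundary, the attack transfers almost completely here, matching the "features" story; a low transfer rate on your own stack would instead suggest model-specific bugs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic binary task: two Gaussian blobs in 50 dimensions.
d, n = 50, 2000
mu = 0.5 * rng.normal(size=d)
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)

def train_logreg(X, y, steps=400, lr=0.1):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return float(np.mean((X @ w > 0).astype(int) == y))

# Two models trained on disjoint halves of the same data distribution.
w_a = train_logreg(X[:1000], y[:1000])
w_b = train_logreg(X[1000:], y[1000:])

# Gradient-sign attack against model A only. For a linear model the sign of
# the loss gradient w.r.t. the input is exactly sign((1 - 2y) * w_a).
eps = 1.0
X_adv = X + eps * np.sign(np.outer(1 - 2 * y, w_a))

print("clean accuracy    A, B:", acc(w_a, X, y), acc(w_b, X, y))
print("attacked accuracy A, B:", acc(w_a, X_adv, y), acc(w_b, X_adv, y))
```

Model B never saw the attack or model A's weights, yet its accuracy collapses too; running the same measurement against your own models is the practical test suggested in the safeguards below.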


What This Means for Businesses Leveraging AI

For startups or businesses adopting machine learning, the distinction between features and bugs isn’t just academic. It informs risk management, product design, and customer trust. Entrepreneurs running AI-enabled products, from chatbots to fraud detection systems, should consider the following.

  1. Data-Driven Risks Are Inevitable
    Even models trained carefully on high-quality datasets are not immune to adversarial attacks, because non-robust features are part of the data itself. A malicious actor can manipulate inputs in ways that evade detection.

  2. Model Bugs Can Be Fixed
    Improving architecture and training methods can mitigate some vulnerabilities. Startups focused on robustness might prioritize ensemble learning or techniques like adversarial training, where models are tested against adversarial examples during training.

  3. Not All Problems Are Solvable Yet
    While adversarial training reduces vulnerabilities, it doesn’t solve the core issue of non-robust features being inherent in data. Startups should temper expectations and maintain transparency with clients.

  4. Human Priorities Influence Outcomes
    Non-robust features emerge because models prioritize accuracy over interpretability or robustness. For businesses, balancing these goals is key, especially when operating in critical sectors like healthcare or finance.


How to Safeguard Your Model Against Adversarial Examples

Here’s a practical approach for businesses looking to address this challenge:

  1. Understand Transferability
    Study whether adversarial examples targeting your model transfer to others. If they do, the issue may relate to non-robust features, which are harder to defend against. If not, you likely face bugs specific to your system.

  2. Invest in Adversarial Training
    Incorporate adversarial examples into your training pipeline to improve robustness. For instance, models trained using Projected Gradient Descent (PGD) show stronger defenses.

  3. Leverage Pretrained Ensembles
    Combining diverse models increases resilience, since model-specific adversarial examples crafted for one model often fail to generalize to others. Keep in mind that attacks exploiting non-robust features may still transfer across the whole ensemble.

  4. Monitor in Real Time
    Use anomaly detection tools to flag inputs that deviate significantly from known patterns. Detecting malicious manipulation early is half the battle.

  5. Educate Internal Teams
    Build awareness among your team about the risks associated with adversarial examples and how they impact downstream applications.
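
Steps 1 and 2 above can be sketched end to end in numpy. The example below is a toy illustration under assumed synthetic data, not a production recipe: one column is a strong "robust" feature and fifty columns are individually faint "non-robust" features. A standard logistic regression leans on the fragile weak features; the same model trained with a PGD-style inner attack (iterated gradient-sign steps projected back into an epsilon-ball) is pushed toward the robust feature and holds up far better under attack.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data with one strong "robust" feature (column 0) and 50 weak
# "non-robust" features: individually faint, collectively very predictive.
n, weak = 1000, 50
s = rng.choice([-1.0, 1.0], size=n)    # class sign
y = (s > 0).astype(float)              # labels in {0, 1}
X = np.empty((n, 1 + weak))
X[:, 0] = 2.0 * s + rng.normal(size=n)
X[:, 1:] = 0.5 * s[:, None] + rng.normal(size=(n, weak))

def grads(w, X, y):
    """Logistic-loss gradients w.r.t. the weights and the inputs."""
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    return X.T @ (p - y) / len(y), (p - y)[:, None] * w[None, :]

def pgd(w, X, y, eps, steps=5):
    """Multi-step L-infinity attack: ascend the loss, project into the eps-ball."""
    X_adv = X.copy()
    for _ in range(steps):
        _, gx = grads(w, X_adv, y)
        X_adv = X + np.clip(X_adv + (eps / 2) * np.sign(gx) - X, -eps, eps)
    return X_adv

def train(X, y, eps=0.0, epochs=300, lr=0.1):
    """Standard training when eps == 0; PGD adversarial training when eps > 0."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        X_batch = pgd(w, X, y, eps) if eps > 0 else X
        gw, _ = grads(w, X_batch, y)
        w -= lr * gw
    return w

def acc(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0.5)))

eps = 0.6
w_std = train(X, y)            # leans on the fragile weak features
w_rob = train(X, y, eps=eps)   # pushed toward the robust feature

print("clean accuracy:    ", acc(w_std, X, y), acc(w_rob, X, y))
print("accuracy under PGD:", acc(w_std, pgd(w_std, X, y, eps), y),
      acc(w_rob, pgd(w_rob, X, y, eps), y))
```

Note the trade-off the article describes: the robust model typically gives up a little clean accuracy because it discounts the weak features, which is the price of not relying on patterns an attacker can flip within the epsilon budget.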


Common Mistakes When Handling Adversarial Vulnerabilities

Avoiding missteps can save both time and costs. Here’s what to watch out for:

  • Underestimating Human Oversight
    Some assume that AI can handle everything autonomously. This is untrue in adversarial situations. Human review remains key.

  • Fixating on One Approach
    Relying solely on technical solutions like adversarial training can blind you to broader risk frameworks.

  • Failing to Test Regularly
    Adversarial examples evolve as attackers find new methods. Regular testing helps maintain an edge.


What Startups Can Learn From This Debate

Adversarial examples teach us that no model operates in isolation from its environment. As I’ve repeatedly seen in building gaming startups like Fe/male Switch or tech-for-good initiatives, flaws and features often coexist, creating both risks and opportunities. Models that exploit non-robust features may outperform others but at the price of security. Entrepreneurs must weigh these trade-offs deliberately.

At the core of innovation lies experimentation combined with responsibility. By acknowledging the challenges posed by adversarial examples and addressing them with multi-pronged strategies, startups can build resilient, trustworthy products. For a deeper dive, read the research on non-robust features discussed above; it is well regarded in the field.

Ultimately, understanding how adversarial examples fit into the bigger picture of machine learning isn’t just for researchers. For business leaders, it’s a step towards building AI-driven systems that work not only in theory but in practice too.


FAQ

1. What are adversarial examples in machine learning?
Adversarial examples are inputs specifically designed to deceive machine learning models by exploiting their vulnerabilities, altering results subtly but effectively.

2. Are adversarial examples considered bugs or features?
They can be both. Adversarial examples arise from non-robust features inherently present in the data or from model-specific vulnerabilities often seen as bugs.

3. What do non-robust features mean in this context?
Non-robust features are patterns in the data that are predictive for models but incomprehensible to humans and sensitive to small perturbations.

4. Can adversarial examples be mitigated during training?
Yes. Adversarial training techniques, such as those based on Projected Gradient Descent (PGD), incorporate adversarial examples into the training pipeline to improve robustness.

5. How do adversarial examples impact businesses using AI?
Adversarial examples pose risks such as inaccuracies, security vulnerabilities, and trust issues in AI-driven applications, underscoring the need for robustness frameworks.

6. How do adversarial examples affect model transferability?
Adversarial examples that exploit non-robust features are often transferable across models, while those caused by model-specific bugs transfer poorly.

7. What methodologies are used to detect adversarial examples?
Techniques include anomaly detection, ensemble model comparison, and analyzing how far inputs deviate from known patterns.

8. Are adversarial examples unique to certain datasets?
No. They appear across datasets whenever models exploit weak, non-robust features that are susceptible to manipulation.

9. What steps can startups take to protect against adversarial attacks?
Startups can implement adversarial training, use ensembles, monitor inputs in real time, and educate teams about adversarial risks.

10. Why are adversarial examples important for the AI industry?
They highlight the limitations of current models and drive advances in robust AI development, which is critical for broader adoption and trust-building.

About the Author

Violetta Bonenkamp, also known as MeanCEO, is an experienced startup founder with an impressive educational background including an MBA and four other higher education degrees. She has over 20 years of work experience across multiple countries, including 5 years as a solopreneur and serial entrepreneur. Throughout her startup experience she has applied for multiple startup grants at the EU level, in the Netherlands and Malta, and her startups received quite a few of those. She’s been living, studying and working in many countries around the globe and her extensive multicultural experience has influenced her immensely.

Violetta Bonenkamp's expertise in the CAD sector, IP protection, and blockchain

Violetta Bonenkamp is recognized as a multidisciplinary expert with significant achievements in the CAD sector, intellectual property (IP) protection, and blockchain technology.

CAD Sector:

  • Violetta is the CEO and co-founder of CADChain, a deep tech startup focused on developing IP management software specifically for CAD (Computer-Aided Design) data. CADChain addresses the lack of industry standards for CAD data protection and sharing, using innovative technology to secure and manage design data.
  • She has led the company since its inception in 2018, overseeing R&D, PR, and business development, and driving the creation of products for platforms such as Autodesk Inventor, Blender, and SolidWorks.
  • Her leadership has been instrumental in scaling CADChain from a small team to a significant player in the deeptech space, with a diverse, international team.

IP Protection:

  • Violetta has built deep expertise in intellectual property, combining academic training with practical startup experience. She has taken specialized courses in IP from institutions like WIPO and the EU IPO.
  • She is known for sharing actionable strategies for startup IP protection, leveraging both legal and technological approaches, and has published guides and content on this topic for the entrepreneurial community.
  • Her work at CADChain directly addresses the need for robust IP protection in the engineering and design industries, integrating cybersecurity and compliance measures to safeguard digital assets.

Blockchain:

  • Violetta’s entry into the blockchain sector began with the founding of CADChain, which uses blockchain as a core technology for securing and managing CAD data.
  • She holds several certifications in blockchain and has participated in major hackathons and policy forums, such as the OECD Global Blockchain Policy Forum.
  • Her expertise extends to applying blockchain for IP management, ensuring data integrity, traceability, and secure sharing in the CAD industry.

Violetta is a true multidisciplinary specialist who has built expertise in Linguistics, Education, Business Management, Blockchain, Entrepreneurship, Intellectual Property, Game Design, AI, SEO, Digital Marketing, cybersecurity and zero-code automations. Her extensive educational journey includes a Master of Arts in Linguistics and Education, an Advanced Master in Linguistics from Belgium (2006-2007), an MBA from Blekinge Institute of Technology in Sweden (2006-2008), and an Erasmus Mundus joint program European Master of Higher Education from universities in Norway, Finland, and Portugal (2009).

She is the founder of Fe/male Switch, a startup game that encourages women to enter STEM fields, and also leads CADChain and multiple other projects, such as the Directory of 1,000 Startup Cities with a proprietary MeanCEO Index that ranks cities for female entrepreneurs. Violetta created the "gamepreneurship" methodology, which forms the scientific basis of her startup game, and she builds SEO tools for startups. Her achievements include being named one of the top 100 women in Europe by EU Startups in 2022 and being nominated for Impact Person of the Year at Dutch Blockchain Week. She is an author with Sifted and a speaker at various universities. Recently she published a book, "Startup Idea Validation the right way: from zero to first customers and beyond," launched a directory of 1,500+ websites where startups can list themselves to gain traction and build backlinks, and is building MELA AI to help local restaurants in Malta get more visibility online.

For the past several years Violetta has been living between the Netherlands and Malta, while also regularly traveling to different destinations around the globe, usually due to her entrepreneurial activities. This has led her to start writing about different locations and amenities from the POV of an entrepreneur. Here’s her recent article about the best hotels in Italy to work from.