Artificial intelligence (AI) is evolving rapidly, influencing industries in ways we couldn’t have imagined a decade ago. However, ensuring that AI operates safely and aligns with human values is a more complex problem than it may initially seem. This isn’t just an engineering challenge. It’s a dilemma that crosses into psychology, sociology, anthropology, and ethics, all fields traditionally dominated by the social sciences.
From my perspective as a serial entrepreneur building AI-based solutions, the intensity with which AI developers focus on technology, often sidelining human context, is troubling. Understanding how AI interacts with human behavior, culture, and decision-making cannot be achieved through algorithms alone. This is where social scientists become indispensable.
Why AI Safety Demands Social Science Expertise
AI developers focus primarily on machine performance, training algorithms to maximize accuracy, efficiency, and learning capacity. But performance doesn’t guarantee safety. If you don’t account for human biases, cultural differences, or unpredictable societal impacts, even the smartest algorithms can fail disastrously.
Take facial recognition technology as an example. Research has highlighted significant racial biases in the algorithms used in commercial applications. These biases didn’t come from malicious coding but from flawed datasets created without cultural and demographic diversity in mind. Cultural nuances and systemic biases embedded in historical data often go unnoticed by developers but are glaring to sociologists and psychologists.
Social scientists are also vital in regulatory development. Policymakers usually lack hands-on technical expertise and fail to predict how AI could amplify existing inequalities or create unintended consequences. Social scientists can bridge this gap, helping translate technical risks into regulatory frameworks that safeguard public interest.
Investors love to talk about "disruption," but disruption without safety measures often leads to scandals and lawsuits. I’ve seen startups undercut their own success by ignoring repercussions, from damaged public perception to regulatory crackdowns. With global AI investment expected to surpass $300 billion by 2026, as per Precedence Research, this risk grows exponentially.
Where Social Science Adds Value
Here’s where professionals in sociology, psychology, and allied fields genuinely elevate AI safety:
- Ethical Frameworks: Social scientists help design ethical boundaries for AI systems. For example, anthropological data ensures that AI respects cultural privacy standards, avoiding misuse in sensitive areas like medical diagnosis or immigration control.
- Bias Identification and Mitigation: Psychologists and behavioral scientists can detect cognitive biases in data collection and interpretation, areas that often blindside developers. Tools like Google’s “What-If Tool” for bias testing benefit when analyzed through this lens (a minimal sketch of this kind of check follows this list).
- Effective Communication Between AI and Humans: Converting sophisticated AI-generated insights into understandable messaging for humans is harder than it sounds. Cognitive psychologists study how humans perceive information and make decisions, helping refine AI interfaces and speech systems.
- Scenario Testing Beyond Lab Environments: Using ethnographic methods, social scientists simulate how AI will perform in messy, real-world environments, far removed from ideal lab conditions. This type of testing is critical in sectors like urban planning, where AI must navigate unpredictable human behavior.
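To make the bias-identification point concrete, here is a minimal sketch of one common check: comparing selection rates across demographic groups in a model’s decisions, sometimes called a demographic parity gap. It is illustrative only; the column names, toy data, and the 0.2 threshold are hypothetical assumptions, not a reference implementation of the What-If Tool or any other product.

```python
# A minimal sketch of the kind of bias check a behavioral scientist might
# request: compare positive-outcome rates across demographic groups.
# Column names, toy data, and the threshold are hypothetical.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest difference in selection rate between any two groups."""
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy data standing in for model decisions on a loan-approval task.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })
    rates = selection_rate_by_group(decisions, "group", "approved")
    gap = demographic_parity_gap(rates)
    print(rates)
    print(f"Demographic parity gap: {gap:.2f}")
    # 0.2 is a common but contested rule of thumb, not a standard;
    # where to set it is a social-science question, not a coding one.
    if gap > 0.2:
        print("Potential disparate impact: escalate for human review.")
```

The code itself is trivial. The hard questions, which groups to compare, which fairness metric actually matters for the use case, and where to set the threshold, are exactly where psychologists and sociologists earn their seat on the team.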
How Entrepreneurs Can Integrate Social Scientists into Their AI Projects
Early integration is key. Waiting until post-launch to think about ethical or societal risks puts both your brand and your users in jeopardy. Here’s a roadmap to start integrating this interdisciplinary expertise:
- Hire Interdisciplinary Researchers Early: Don’t treat social scientists as post-production consultants; make them part of the development lifecycle. Collaboration during the data-gathering and pilot-testing phases will reveal risks engineers won’t anticipate.
- Build Diverse Teams: A homogeneous development team risks incubating unexamined biases. Include project members with varied educational and cultural backgrounds, especially from the countries where your AI might deploy.
- Invest in Sociotechnical Audits: Conduct regular audits of your systems with social scientists. For example, ethnographers can assess how emerging products behave in complex social environments, such as disaster relief zones or crowded urban centers. Useful guides on sociotechnical evaluations are available from groups like Data & Society (read here about sociotechnical expertise).
Mistakes Startup Founders Should Avoid
- Overlooking Local Context: Global deployments fail when startups ignore regional nuances. Consider chatbots designed in English: many fail when their training data is applied to different languages or cultural interpretations. Ensure localized insights are integrated.
- Sole Reliance on Engineers: Underestimating the role of behavioral studies leads to one-dimensional designs. AI’s job is to engage humans effectively, which demands empathy, an area where social sciences can guide developers.
- Using Static Databases: Assuming raw data is neutral is a rookie mistake. All data has biases, from what’s included to what’s omitted. Social scientists are invaluable in identifying and addressing these flaws during data preparation (see the sketch after this list).
- Neglecting Human-Centric Evaluations: Your product may perform flawlessly under a technical stress test. That doesn’t mean it will perform well when interacting with real people under stressful, ambiguous conditions.
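To illustrate the data-preparation point, here is a minimal sketch of a representation check that compares group shares in a dataset against a reference distribution, such as census figures. The group labels, reference shares, and tolerance below are hypothetical assumptions for illustration, not figures from any real dataset.

```python
# A minimal sketch of a data-preparation check: compare how demographic
# categories are represented in a dataset against a reference distribution
# (e.g., census figures). All labels and numbers here are illustrative.
from collections import Counter

def representation_report(samples: list[str], reference: dict[str, float],
                          tolerance: float = 0.05) -> dict[str, str]:
    """Flag groups whose share in the data drifts from the reference share."""
    total = len(samples)
    counts = Counter(samples)
    report = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        status = "OK" if abs(observed - expected) <= tolerance else "UNDER/OVER-REPRESENTED"
        report[group] = f"observed {observed:.2%} vs expected {expected:.2%} -> {status}"
    return report

if __name__ == "__main__":
    # Toy dataset labels standing in for the demographic tag of each record.
    data = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
    census = {"A": 0.50, "B": 0.30, "C": 0.20}  # hypothetical reference shares
    for group, line in representation_report(data, census).items():
        print(group, line)
```

A check like this only surfaces the gap; deciding which reference distribution is appropriate, and whether under-representation actually matters for the task, is the social scientist’s call.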
Closing Thoughts
As a founder deeply engaged with AI applications, I’ve learned firsthand that success doesn’t hinge solely on cutting-edge algorithms. Your technology matters, but your understanding of people matters more. AI safety isn’t something you “add on” later. It requires front-loading human insight into the design and deployment phases.
If you’re a startup founder diving into the AI space, consider this your most important call to action: integrate social scientists from day one. It will save you costly mistakes, ethical missteps, and reputational hits down the road.
AI may be a technology, but its applications, and implications, are profoundly human. Let’s treat them that way.
FAQ
1. Why does AI safety require input from social scientists?
AI safety relies on understanding human values, biases, and decision-making processes, which are areas of expertise for social scientists. These disciplines help address societal impacts, ethical considerations, and human-centric risks that machine learning alone cannot solve. Discover sociotechnical approaches to AI safety
2. How can social scientists help mitigate bias in AI?
Psychologists and sociologists can identify and address biases in AI datasets, ensuring algorithms don't reinforce existing inequalities. This approach was used in evaluating racial biases in facial recognition technologies. Explore bias mitigation strategies
3. What role do social scientists play in creating regulatory frameworks for AI?
Social scientists help translate technical risks into regulations that prioritize public interest while addressing societal consequences of AI implementation. They can analyze legal, social, and cultural contexts to design more informed policies. Learn about applying sociotechnical expertise
4. How does interdisciplinary collaboration benefit AI development?
Interdisciplinary teams combining social sciences and machine learning build systems that better understand human behavior, cultural influences, and ethical concerns, creating more reliable and human-centered AI. Check out insights on sociotechnical governance
5. Why is understanding human decision-making crucial for AI?
AI systems interact with humans, often under uncertain conditions. Social scientists study cognitive biases and decision-making patterns, helping refine AI interactions to be more effective and intuitive. Learn about decision-making research
6. What methods can social scientists use to test AI safety?
Ethnographic studies, sociological audits, and behavioral analysis in real-world environments help evaluate the societal implications of AI systems far beyond lab conditions. Explore real-world testing methods
7. What mistakes should startups avoid in AI projects?
Startups often neglect social complexities, rely solely on engineers, overlook biases in datasets, and fail to include human-centric evaluations. Integrating social science expertise prevents missteps in deployment. Discover common AI deployment errors
8. How can social scientists improve communication between AI and humans?
By studying how humans perceive and process information, cognitive psychologists optimize AI interfaces and messaging systems to be more user-friendly and culturally sensitive. Learn about effective AI communication
9. Why is scenario testing beyond lab conditions important?
Social scientists test AI systems in unpredictable, real-world environments to evaluate their performance in complex societal contexts, such as disaster relief or urban planning. Check out scenario testing methods
10. How should AI projects integrate social scientists throughout the development cycle?
Early collaboration with social scientists during data collection, development, and pilot-testing phases helps anticipate societal risks and improve safety outcomes. Discover interdisciplinary team strategies
About the Author
Violetta Bonenkamp, also known as MeanCEO, is an experienced startup founder with an impressive educational background including an MBA and four other higher education degrees. She has over 20 years of work experience across multiple countries, including 5 years as a solopreneur and serial entrepreneur. Throughout her startup experience she has applied for multiple startup grants at the EU level, in the Netherlands and Malta, and her startups received quite a few of those. She’s been living, studying and working in many countries around the globe and her extensive multicultural experience has influenced her immensely.
Violetta Bonenkamp’s expertise in the CAD sector, IP protection, and blockchain
Violetta Bonenkamp is recognized as a multidisciplinary expert with significant achievements in the CAD sector, intellectual property (IP) protection, and blockchain technology.
CAD Sector:
- Violetta is the CEO and co-founder of CADChain, a deep tech startup focused on developing IP management software specifically for CAD (Computer-Aided Design) data. CADChain addresses the lack of industry standards for CAD data protection and sharing, using innovative technology to secure and manage design data.
- She has led the company since its inception in 2018, overseeing R&D, PR, and business development, and driving the creation of products for platforms such as Autodesk Inventor, Blender, and SolidWorks.
- Her leadership has been instrumental in scaling CADChain from a small team to a significant player in the deeptech space, with a diverse, international team.
IP Protection:
- Violetta has built deep expertise in intellectual property, combining academic training with practical startup experience. She has taken specialized courses in IP from institutions like WIPO and the EUIPO.
- She is known for sharing actionable strategies for startup IP protection, leveraging both legal and technological approaches, and has published guides and content on this topic for the entrepreneurial community.
- Her work at CADChain directly addresses the need for robust IP protection in the engineering and design industries, integrating cybersecurity and compliance measures to safeguard digital assets.
Blockchain:
- Violetta’s entry into the blockchain sector began with the founding of CADChain, which uses blockchain as a core technology for securing and managing CAD data.
- She holds several certifications in blockchain and has participated in major hackathons and policy forums, such as the OECD Global Blockchain Policy Forum.
- Her expertise extends to applying blockchain for IP management, ensuring data integrity, traceability, and secure sharing in the CAD industry.
Violetta is a true multidisciplinary specialist who has built expertise in Linguistics, Education, Business Management, Blockchain, Entrepreneurship, Intellectual Property, Game Design, AI, SEO, Digital Marketing, Cybersecurity, and Zero-Code Automation. Her extensive educational journey includes a Master of Arts in Linguistics and Education, an Advanced Master in Linguistics from Belgium (2006-2007), an MBA from Blekinge Institute of Technology in Sweden (2006-2008), and an Erasmus Mundus joint program European Master of Higher Education from universities in Norway, Finland, and Portugal (2009).
She is the founder of Fe/male Switch, a startup game that encourages women to enter STEM fields, and also leads CADChain and multiple other projects, such as the Directory of 1,000 Startup Cities with a proprietary MeanCEO Index that ranks cities for female entrepreneurs. Violetta created the "gamepreneurship" methodology, which forms the scientific basis of her startup game, and she builds SEO tools for startups. Her achievements include being named one of the top 100 women in Europe by EU Startups in 2022 and being nominated for Impact Person of the Year at Dutch Blockchain Week. She is an author with Sifted and a speaker at various universities. Recently she published a book on Startup Idea Validation the right way: from zero to first customers and beyond, launched a Directory of 1,500+ websites where startups can list themselves to gain traction and build backlinks, and is building MELA AI to help local restaurants in Malta get more visibility online.
For the past several years Violetta has been living between the Netherlands and Malta, while also regularly traveling to different destinations around the globe, usually due to her entrepreneurial activities. This has led her to start writing about different locations and amenities from the POV of an entrepreneur. Here’s her recent article about the best hotels in Italy to work from.

