Responsible AI Policy
Last Updated: April 2026
1. Our Position on AI
At Eveningside Labs, artificial intelligence is central to what we build. We develop AI-powered automation, SaaS platforms, fine-tuned models, intelligent bots, and data pipelines for businesses across industries and geographies. We believe that the power of AI must be matched by an equal commitment to its responsible development and deployment.
This Responsible AI Policy articulates the principles, safeguards, and governance practices we apply to every AI-related engagement. It is a living document that evolves alongside the technology, regulation, and societal understanding of AI’s impact.
2. Core Principles
2.1 Transparency
We believe people have a right to know when they are interacting with an AI system and to understand, in plain language, how that system influences the outcomes they experience. In practice, this means:
- AI disclosure. We design all client-facing AI systems with clear, conspicuous disclosure that an AI model is involved in generating content, making recommendations, or driving automation. Where a chatbot, virtual agent, or automated email system is used, the system identifies itself as AI-powered.
- Explainability. Where technically feasible and proportionate to the risk level, we implement mechanisms that allow users and stakeholders to understand the key factors behind an AI-generated output, recommendation, or decision. For high-risk systems, we document model architecture, training data sources, intended use, and known limitations.
- Documentation. Every AI engagement includes delivery of a model card or system summary that describes the model’s capabilities, limitations, evaluation metrics, and appropriate use cases.
2.2 Fairness & Non-Discrimination
AI systems must not perpetuate or amplify unfair biases. We apply the following practices:
- Bias testing. We evaluate models for demographic bias across protected characteristics (including race, gender, age, religion, disability, and sexual orientation) using appropriate fairness metrics before deployment and on a periodic basis thereafter.
- Training data review. We assess training and fine-tuning datasets for representation gaps, label bias, and historical discrimination. Where bias is detected, we apply mitigation strategies such as data augmentation, re-sampling, or post-processing calibration.
- Continuous monitoring. For production systems, we implement monitoring pipelines that track model drift, output distribution shifts, and fairness metrics over time, triggering alerts when thresholds are breached.
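As a minimal illustration of the kind of fairness metric referenced above (a sketch, not our production tooling — names and thresholds here are hypothetical), the following computes the demographic parity gap: the largest difference in positive-prediction rate between groups.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group "a" is selected at 0.75, group "b" at 0.25.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

In practice a gap above an agreed threshold would trigger the monitoring alerts described above; demographic parity is only one of several fairness metrics, chosen per engagement.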
2.3 Safety & Reliability
AI systems must operate safely and predictably within their intended scope:
- Scope boundaries. We define explicit boundaries for what each AI system is designed to do and implement guardrails to prevent operation outside those boundaries, including input validation, output filtering, and fallback mechanisms.
- Human-in-the-loop. For systems that make or influence consequential decisions (e.g., financial, medical, legal, or employment-related), we design workflows that require meaningful human review before final action is taken.
- Adversarial testing. We conduct red-team exercises and adversarial testing to identify vulnerabilities, including prompt injection, data poisoning, and model inversion attacks.
- Incident response. We maintain an AI incident response process. If an AI system produces harmful, inaccurate, or unintended outputs in production, we have procedures to quickly disable, correct, or roll back the system.
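To make the scope-boundary idea concrete, here is a simplified sketch of input validation with a fallback mechanism (the topic list, keyword matcher, and fallback message are hypothetical — a production system would use a trained classifier and human hand-off):

```python
import re

ALLOWED_TOPICS = {"billing", "shipping", "returns"}  # hypothetical scope
FALLBACK = "I can only help with billing, shipping, or returns. Routing you to a human agent."

def classify_topic(text):
    # Placeholder keyword matcher standing in for a real intent classifier.
    for topic in ALLOWED_TOPICS:
        if re.search(rf"\b{topic}\b", text, re.IGNORECASE):
            return topic
    return None

def guarded_reply(text, model_fn):
    topic = classify_topic(text)
    if topic is None:
        return FALLBACK           # out of scope: refuse and fall back
    return model_fn(text, topic)  # in scope: call the underlying model
```

The same pattern applies on the output side: model responses are filtered against the declared scope before they reach the user.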
2.4 Privacy & Data Protection
AI development must respect individual privacy:
- Data minimisation. We collect and process only the data that is necessary for the specific AI task. Where possible, we use anonymised or synthetic data for model development and testing.
- Purpose limitation. Data provided for one engagement is not repurposed for training models for other clients or for our own commercial AI products without explicit written consent.
- Privacy-preserving techniques. We employ techniques such as differential privacy, data pseudonymisation, and federated learning where appropriate to reduce the risk of re-identification.
- Compliance. All AI-related data processing is conducted in compliance with our Privacy Policy and applicable data protection laws, including the GDPR, CCPA/CPRA, and India’s DPDPA.
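As one example of the pseudonymisation referenced above (a sketch, assuming a keyed-hash approach; the key name and storage are hypothetical), direct identifiers can be replaced with deterministic tokens so datasets remain joinable without exposing the underlying values:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-in-a-vault"  # hypothetical; kept outside the dataset

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    The same input always maps to the same token (so joins still work),
    but the mapping cannot be reversed without the secret key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

token = pseudonymise("alice@example.com")
```

Because the key is held separately from the data, re-identification risk is reduced rather than eliminated; for higher-risk datasets we layer in the other techniques listed above.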
3. Prohibited AI Uses
Eveningside Labs will not develop, deploy, or knowingly facilitate AI systems for the following purposes:
- Autonomous lethal weapons systems or components thereof.
- Social scoring systems that evaluate individuals based on social behaviour or predicted personal characteristics.
- Real-time biometric identification in publicly accessible spaces for law enforcement purposes (except where expressly permitted by law and subject to judicial authorisation).
- Generation of non-consensual intimate imagery (NCII) or child sexual abuse material (CSAM).
- Manipulation of individuals through subliminal techniques or exploitation of vulnerabilities (age, disability, economic situation) in ways likely to cause significant harm.
- Predatory targeting of vulnerable populations, including personalised pricing designed to exploit economic hardship.
- Any application that violates our Acceptable Use Policy.
4. EU AI Act Readiness
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a comprehensive, risk-based framework for AI regulation. Although Eveningside Labs is headquartered in India, many of our clients operate within the EU or deploy systems that affect EU residents. We therefore proactively align our practices with the AI Act’s requirements:
- Risk classification. For every AI engagement, we assess the system against the AI Act’s risk tiers (unacceptable, high, limited, minimal) and apply commensurate safeguards.
- High-risk system compliance. Where a system qualifies as high-risk (e.g., AI used in employment decisions, credit scoring, education assessment), we implement mandatory requirements including risk management systems, data governance practices, technical documentation, logging, human oversight mechanisms, and accuracy/robustness testing.
- Transparency obligations. For systems that interact directly with individuals (chatbots, recommendation engines), we ensure that users are informed they are interacting with AI and that AI-generated content (e.g., deepfakes, synthetic text) is clearly labelled as such.
- General-purpose AI models. When we integrate or fine-tune general-purpose AI models (such as large language models), we document training methodologies, evaluation results, and known limitations in line with the Act’s requirements for general-purpose AI providers.
- Record keeping. We maintain documentation sufficient to demonstrate compliance with the AI Act upon request by clients or their EU-based market surveillance authorities.
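The risk-classification step above can be sketched as a simple lookup that defaults to the stricter tier when a use case is unfamiliar (the category names here are hypothetical examples; the actual assessment is a documented legal and engineering review, not a table lookup):

```python
# Hypothetical mapping from use-case categories to AI Act risk tiers.
USE_CASE_TIER = {
    "social_scoring": "unacceptable",   # prohibited; engagement declined
    "employment_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",      # transparency obligations apply
    "spam_filter": "minimal",
}

def assess_risk(use_case: str) -> str:
    # Unknown use cases default to "high" pending a full review,
    # so safeguards are never under-applied by omission.
    return USE_CASE_TIER.get(use_case, "high")
```

The assigned tier then drives which safeguards from the list above are mandatory for that engagement.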
5. Governance & Accountability
Responsible AI is not delegated to a single team — it is an organisational commitment:
- Leadership accountability. Our founding team is directly accountable for the ethical implications of our AI work and reviews this policy at least annually.
- Pre-engagement review. Before accepting any AI engagement, we conduct an ethical use assessment to ensure the proposed use case aligns with our principles and this Policy.
- Training. All team members involved in AI development receive training on responsible AI principles, regulatory requirements, and bias awareness.
- Client collaboration. We work with clients to ensure they understand the capabilities, limitations, and responsible-use requirements of the AI systems we build for them.
6. Reporting Concerns
If you have concerns about the ethical implications of an AI system built by Eveningside Labs, or if you believe one of our systems is causing harm, we encourage you to contact us. We take all reports seriously and will investigate promptly.
7. Contact
For questions about this Responsible AI Policy, please contact:
Eveningside Labs
Ahmedabad, Gujarat 380015, India
Email: ai-ethics@eveningsidelabs.com