Select the language for this session. You can change it anytime from the bar above.
AI systems screen job applications, assess visas, score credit, and price insurance. The EU AI Act, GDPR Article 22, Canada's Treasury Board Directive on Automated Decision-Making, and equivalent laws give you specific rights. AEQUITA explains them, and gives you the formal request tools to exercise them yourself.
Generate a formal request for an explanation of any AI decision, under GDPR Art. 22 or the EU AI Act. AEQUITA drafts it. You send it.
In every one of these areas, you have specific rights. Most people have never been told.
Log every AI decision made about you. Build a timestamped evidence record. Generate a formal bias pattern report you can file with a regulator.
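To make the evidence record concrete, here is a minimal sketch (in TypeScript) of what one logged decision could look like. The interface and its field names are illustrative assumptions, not AEQUITA's actual schema.

    // Hypothetical shape of one entry in a timestamped AI-decision log.
    // Field names are illustrative; they are not AEQUITA's real schema.
    interface DecisionLogEntry {
      recordedAt: string;         // ISO 8601 timestamp of when you logged it
      decisionMaker: string;      // organisation that ran the automated decision
      domain: "employment" | "credit" | "immigration" | "housing" | "insurance";
      outcome: string;            // e.g. "application rejected"
      evidence: string[];         // filenames or URLs of supporting documents
      legalBasisInvoked?: string; // e.g. "GDPR Art. 22", "EU AI Act Art. 86"
    }

    const entry: DecisionLogEntry = {
      recordedAt: new Date().toISOString(),
      decisionMaker: "Example Bank Ltd",
      domain: "credit",
      outcome: "loan application declined by automated scoring",
      evidence: ["decline-email.pdf"],
      legalBasisInvoked: "GDPR Art. 22",
    };

A consistent structure like this is what makes the later bias pattern report possible: entries can be filtered by decision maker and domain, and every claim traces back to a dated document.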
PROVA builds your formal dispute file · TEMPORA tracks every deadline · CONSERVA permanently archives your key documents
Your AI rights are free to know. Always.
AEQUITA, the twelfth platform in the PLENA suite, is an AI rights and algorithmic accountability platform. It explains the rights that exist in law when AI systems make decisions about your employment, credit, immigration status, housing, or insurance, and how to use those rights.
Hiring AI that screens out international names. Credit algorithms that penalise frequent address changes. Insurance pricing that disadvantages immigrant postcodes. Visa processing systems that risk-score by nationality. These automated decisions shape the lives of newcomers and diaspora professionals — often invisibly, always without explanation. AEQUITA changes that.
FORTIA explains your rights as a worker, tenant, and citizen under existing law. AEQUITA explains the new category of rights that emerged with AI legislation — specifically the rights that apply when a machine, rather than a human, makes decisions about you. Together they cover the full spectrum of rights that PLENA's audience needs.
Employers deploying AI in hiring, financial institutions using algorithmic credit scoring, and government agencies using automated decision-making all face increasing regulatory pressure to demonstrate compliance with AI rights legislation. AEQUITA's institutional tier helps compliance teams, HR departments, and legal firms understand their obligations — and build the human review mechanisms that law now requires.
AEQUITA is part of PLENA, created by Jean Claude Havyarimana. hvyjea0012@protonmail.com
TEMPORA extracts deadlines from official documents and keeps a permanent evidence trail.
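As a rough illustration of what "extracts deadlines" can mean in the simplest case, here is a hedged TypeScript sketch that scans a document's text for explicit "no later than"-style dates. Real extraction is far more involved; the pattern and function name are our illustrative assumptions, not TEMPORA's implementation.

    // Illustrative only: find dates that follow common deadline phrasing.
    const DEADLINE_PATTERN = /\b(?:by|before|no later than)\s+(\d{1,2}\s+\w+\s+\d{4})/gi;

    function extractDeadlines(text: string): string[] {
      return [...text.matchAll(DEADLINE_PATTERN)].map((m) => m[1]);
    }

    // extractDeadlines("You must appeal no later than 14 April 2025.")
    // => ["14 April 2025"]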
Insurance companies use AI to automatically approve or deny requests for medical treatments, scans, and medications. In the US: the No Surprises Act and CMS rules require human review of AI denials. In the EU: AI prior authorisation systems are classified high-risk under the EU AI Act — you have the right to an explanation and human review. What to do: request in writing that a licensed physician review any AI-issued denial before you accept it.
Hospitals increasingly use AI to assist with diagnosis and triage. In Texas (USA): SB 1188, effective September 2025, requires practitioners to maintain human oversight of AI-generated medical decisions and disclose AI use to patients. In the EU: diagnostic AI in the highest-risk tier requires pre-deployment assessment, transparency, and human oversight. Your right globally: ask your provider directly — "Was an AI system used in my diagnosis or treatment recommendation?" They are increasingly required to tell you.
Insurance algorithms may use health data — including inferred data from app usage, shopping behaviour, or wearables — to set premiums. Under GDPR (EU/UK): you have the right to know what data was used and to challenge inaccurate data. Under HIPAA (US): your health data has specific protections but algorithmic use of inferred health data is less clearly covered. Under POPIA (South Africa): special category health data requires explicit consent for processing in insurance scoring.
If you are in the EU (or using a service offered to EU users): platforms with over 45 million monthly active users in the EU ("Very Large Online Platforms": Meta, YouTube, TikTok, X, LinkedIn, Pinterest, Snapchat) must give you: the right to an explanation of why content was removed or restricted; the right to appeal moderation decisions to a human reviewer; the right to use a DSA-certified out-of-court dispute settlement body; and the right to a non-personalised (non-algorithmic) feed on recommendation-based services.
Platforms may reduce the visibility of your content without removing it or notifying you. Under DSA Article 27, VLOPs must explain the main parameters of their recommender systems in plain language; under Article 38, you have the right to at least one recommendation option not based on profiling. What to request: a transparency report on your account explaining any reach restrictions applied, what triggered them, and how to appeal.
The UK Online Safety Act (in force since 2024) requires platforms to publish clear content moderation policies and to provide appeals for content removal. It is less detailed than the DSA on algorithmic rights, but data rights under UK GDPR still apply to automated content decisions that affect you. Contact the ICO if a platform used automated processing of your personal data without adequate transparency.
A Subject Access Request compels an organisation to provide all personal data they hold about you — including any data used in automated decisions. This is the foundation of every AI rights challenge.
Under GDPR Article 22 (EU and UK), you have the right to request that a solely automated decision be reviewed by a human being. This letter formally invokes that right.
Under the EU AI Act (Article 86) and GDPR Article 22, you have the right to a meaningful explanation of how an automated decision was reached — what data was used, what weighting was applied, and why the decision went the way it did.
If an organisation fails to respond to your SAR or challenge letter, the next step is a formal complaint to your Data Protection Authority. This template is ready to send directly to your national DPA.
EEOC charges must be filed within 180 days of the discriminatory act (300 days in states with an equivalent state or local agency). This template prepares the narrative for your EEOC charge submission at eeoc.gov/filing-charge-discrimination.
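The filing window itself is plain date arithmetic. A hedged sketch in TypeScript (the function is ours, for illustration; confirm the window that applies in your state before relying on it):

    // 180 days federally; 300 days where a state or local agency enforces
    // an equivalent anti-discrimination law. Illustrative, not legal advice.
    function eeocDeadline(actDate: Date, hasEquivalentStateAgency: boolean): Date {
      const days = hasEquivalentStateAgency ? 300 : 180;
      const deadline = new Date(actDate);
      deadline.setDate(deadline.getDate() + days);
      return deadline;
    }

    // Example: discriminatory act on 1 March 2025, in a 300-day state.
    console.log(eeocDeadline(new Date(2025, 2, 1), true).toDateString());
    // => "Fri Dec 26 2025"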
The ICO (Information Commissioner's Office) handles UK GDPR complaints. File at ico.org.uk/make-a-complaint. This template prepares your complaint narrative.
NYC Local Law 144 requires employers in New York City to publish the results of independent bias audits for any AI hiring tools — and to notify candidates that AI was used. This letter formally requests disclosure of audit results and notification compliance.
Illinois's Artificial Intelligence Video Interview Act requires employers to notify candidates before using AI to evaluate video interviews, explain how the AI works, and obtain consent. Non-compliance is actionable under the Illinois Human Rights Act.
Your case — stripped of identifying details — contributes to pattern detection. If 40 people report the same employer, AEQUITA can surface that to regulators and journalists.
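The grouping step behind that pattern detection is simple to sketch. Assuming anonymised reports keyed by employer (the report shape is an illustrative assumption, and the threshold of 40 is just the example above, not a fixed rule):

    interface AnonymisedReport {
      employer: string;     // reporter's identifying details already stripped
      decisionType: string; // e.g. "CV screening rejection"
    }

    // Return every employer whose anonymised report count reaches the threshold.
    function employersOverThreshold(reports: AnonymisedReport[], threshold = 40): string[] {
      const counts = new Map<string, number>();
      for (const r of reports) {
        counts.set(r.employer, (counts.get(r.employer) ?? 0) + 1);
      }
      return [...counts.entries()]
        .filter(([, n]) => n >= threshold)
        .map(([employer]) => employer);
    }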
PROVA creates a formally structured dispute package — timestamped evidence log, complaint letters, escalation guide — that regulators and tribunals can act on.