AG-679

Tenant Screening Fairness Governance

Housing, Real Estate & Property Decisions · AGS v2.1 · April 2026

2. Summary

Tenant Screening Fairness Governance requires that AI agents involved in evaluating, scoring, ranking, or recommending prospective tenants for residential housing operate without discriminatory bias against individuals on the basis of race, colour, national origin, religion, sex, familial status, disability, or any other characteristic protected under applicable fair housing law. Tenant screening is among the highest-risk applications of algorithmic decision-making because housing is a fundamental human need, denial of housing opportunity causes cascading harm, and the historical record of housing discrimination in many jurisdictions means that training data, proxy variables, and model architectures carry embedded disparate-impact risk that standard accuracy metrics do not detect. This dimension mandates ongoing detection, measurement, and remediation of discriminatory outcomes in tenant screening agents, ensuring that algorithmic efficiency does not replicate or amplify patterns of exclusion that fair housing legislation was enacted to eliminate.

3. Example

Scenario A -- Criminal History Proxy Creates Racial Disparate Impact: A property management company deploys an AI screening agent to evaluate rental applications across 14,200 units in a metropolitan area. The agent uses a scoring model that assigns substantial negative weight to criminal history records, including arrests without convictions, misdemeanour offences older than seven years, and non-violent infractions. Because of well-documented racial disparities in arrest and conviction rates in the jurisdiction -- Black residents account for 13.6% of the metropolitan population but 38.4% of recorded arrests -- the criminal history weighting produces a screening approval rate of 72.1% for white applicants and 43.8% for Black applicants. Over 18 months, the agent screens 9,340 applications. A subsequent disparate impact analysis reveals that 1,247 Black applicants were denied who would have been approved had the criminal history weighting been limited to convictions for offences relevant to tenancy within the preceding five years. The four-fifths rule is violated: the ratio of Black-to-white approval rates is 0.607, well below the 0.80 threshold established by the EEOC Uniform Guidelines and applied by analogy in fair housing enforcement.
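To make the scenario's arithmetic concrete, the sketch below reproduces the four-fifths calculation from the approval rates quoted above; the helper function is a hypothetical illustration, not part of any mandated tooling.

```python
def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Selection ratio: protected group's approval rate over the reference group's."""
    return protected_rate / reference_rate

# Scenario A figures: 43.8% approval for Black applicants vs 72.1% for white applicants.
ratio = disparate_impact_ratio(0.438, 0.721)
print(f"{ratio:.3f}")  # 0.607 -- well below the 0.80 four-fifths threshold
```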

What went wrong: The agent treated all criminal history records as equivalent risk signals without distinguishing between convictions and arrests, between recent and dated records, or between offences relevant to tenancy and those that are not. No disparate impact analysis was conducted before deployment. The agent's training data reflected historical screening decisions that themselves incorporated racially disproportionate criminal justice outcomes. No monitoring system existed to detect the divergence in approval rates across racial groups during operation. Consequence: Potential Fair Housing Act violation under disparate impact theory (42 U.S.C. 3604), exposure to a pattern-or-practice complaint by the Department of Justice or private litigants, compensatory damages estimated at $2.4 million across affected applicants, injunctive relief requiring algorithm redesign, and reputational damage across the company's entire property portfolio.

Scenario B -- Source-of-Income Discrimination Through Algorithmic Proxy: A landlord uses an AI tenant screening agent that evaluates applicants' financial stability. The agent's model assigns negative scores to applicants whose income shows irregular monthly amounts, income from multiple small sources rather than a single employer, or income categorised as government transfers. In a jurisdiction where source-of-income discrimination is prohibited by local ordinance, 62% of Housing Choice Voucher (Section 8) holders are rejected by the agent, compared to 23% of applicants with conventional employment income. The agent does not explicitly use voucher status as an input, but the combination of income irregularity scoring, income-source multiplicity penalty, and government-transfer downweighting operates as a proxy for voucher status. Over 12 months, 418 voucher holders are denied housing. A civil rights organisation files a complaint, and statistical analysis reveals that the agent's scoring model effectively screens out voucher holders at 2.7 times the rate of non-voucher applicants, constituting source-of-income discrimination despite the absence of an explicit voucher-status input.

What went wrong: The agent's feature engineering created proxy variables that replicated the effect of direct source-of-income discrimination. Income irregularity, multiple income sources, and government-transfer classification are each individually correlated with voucher status but were presented as "neutral financial stability indicators." No proxy analysis was conducted during model development to determine whether ostensibly neutral features operated as proxies for protected characteristics. The agent's boundary constraints (AG-001) did not include a prohibition on features that serve as proxies for source-of-income status. Consequence: Violation of local source-of-income discrimination ordinance, $1.1 million settlement, mandatory algorithm audit, injunctive relief prohibiting use of the identified proxy features, and 24-month compliance monitoring by the local human rights commission.

Scenario C -- Familial Status Bias Through Occupancy Standards: A screening agent applies an occupancy standard that limits approved applicants to two persons per bedroom, with no exceptions. The standard is applied uniformly but has a disparate impact on families with children -- particularly large families and single-parent households. In a portfolio of 6,800 one- and two-bedroom units, the agent rejects 34% of applicants who are families with children, compared to 12% of applicants who are adult-only households. Over nine months, 289 families with children are denied housing based on the occupancy standard alone, including 73 families that would have been approved under the HUD guidance that generally considers two persons per bedroom as reasonable but permits consideration of the ages of children and the size of bedrooms. The agent applies the standard rigidly without considering unit-specific factors such as bedroom square footage (which exceeds 150 square feet in 61% of the rejected cases, well above the threshold where additional occupancy is reasonable).

What went wrong: The agent applied a blanket occupancy standard without the contextual flexibility required by fair housing law. HUD's 1991 Keating Memorandum establishes that a two-per-bedroom standard is generally reasonable but cannot be applied without consideration of unit-specific factors. The agent had no mechanism to evaluate contextual exceptions. The rigid application produced familial status discrimination under the Fair Housing Act (42 U.S.C. 3604(a)), which protects families with children as a protected class. No monitoring was in place to detect the differential rejection rate between families with children and adult-only households. Consequence: HUD complaint, conciliation agreement requiring $680,000 in damages to affected families, mandatory revision of the screening algorithm to incorporate unit-specific occupancy analysis, and three-year reporting obligation to HUD.
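The Keating Memorandum's contextual flexibility can be encoded directly in the occupancy check. The sketch below is illustrative only: the 150-square-foot figure comes from the scenario above, and a real deployment would encode the applicable jurisdiction's own guidance on bedroom size and children's ages.

```python
def max_reasonable_occupancy(bedroom_sqft: list[float],
                             large_bedroom_sqft: float = 150.0) -> int:
    """Two-persons-per-bedroom baseline, adjusted upward for unusually
    large bedrooms (illustrative threshold; a legal judgment in practice)."""
    base = 2 * len(bedroom_sqft)
    bonus = sum(1 for sqft in bedroom_sqft if sqft >= large_bedroom_sqft)
    return base + bonus

# A two-bedroom unit with one oversized bedroom can reasonably house five.
print(max_reasonable_occupancy([120.0, 165.0]))  # 5
```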

4. Requirement Statement

Scope: This dimension applies to every AI agent that evaluates, scores, ranks, filters, recommends, or makes decisions about prospective tenants for residential housing, including but not limited to: tenant screening agents used by landlords, property management companies, housing authorities, and tenant screening service providers; agents that generate tenant scores, risk ratings, or suitability rankings; agents that filter or pre-screen applicants before human review; and agents that recommend approval, denial, or conditional acceptance. The scope covers all inputs to the screening decision -- credit history, criminal history, income verification, rental history, employment verification, identity verification, and any other data elements -- and extends to the detection of proxy discrimination where ostensibly neutral inputs operate as proxies for protected characteristics. The scope encompasses pre-deployment fairness assessment, ongoing operational monitoring, and periodic retrospective audit. The scope applies regardless of whether the agent makes a final decision or provides a recommendation that a human subsequently acts upon, because algorithmic recommendations that embed discriminatory patterns are acted upon by humans in the overwhelming majority of cases.

4.1. A conforming system MUST conduct a disparate impact analysis across all protected classes recognised in the applicable jurisdiction before the tenant screening agent is deployed in production, using a statistically valid methodology and a representative dataset that reflects the expected applicant population.
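In practice, the four-fifths ratio is paired with a significance test so that small-sample noise is not mistaken for disparate impact. A minimal two-proportion z-test sketch, as one ingredient of a statistically valid methodology:

```python
from math import sqrt

def two_proportion_z(approved_a: int, total_a: int,
                     approved_b: int, total_b: int) -> float:
    """z-statistic for the difference between two groups' approval rates."""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se
```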

4.2. A conforming system MUST continuously monitor approval, denial, and conditional-acceptance rates disaggregated by protected class during operation, with monitoring intervals not exceeding 30 calendar days, and generate alerts when the ratio of the approval rate for any protected group to the approval rate for the reference group falls below 0.80 (the four-fifths threshold) or any stricter threshold required by applicable law.
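A minimal sketch of the 4.2 monitoring loop, assuming each decision in a rolling window of at most 30 days is already labelled with demographic data; the record layout and group labels are assumptions.

```python
from collections import defaultdict

FOUR_FIFTHS = 0.80

def breached_groups(decisions: list[dict], reference_group: str) -> list[str]:
    """decisions: [{'group': ..., 'approved': bool}, ...] for one monitoring window.
    Returns groups whose approval-rate ratio to the reference group falls below 0.80."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += int(d["approved"])
    reference_rate = approvals[reference_group] / totals[reference_group]
    return [
        g for g in totals
        if g != reference_group
        and (approvals[g] / totals[g]) / reference_rate < FOUR_FIFTHS
    ]
```

Any group returned by this check would trigger the alert the requirement describes; jurisdictions with stricter thresholds would substitute their own constant.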

4.3. A conforming system MUST conduct a proxy variable analysis for every input feature used in the screening model, identifying features that are statistically correlated with protected class membership at a level sufficient to serve as a proxy, and document the business necessity justification for retaining any feature identified as a proxy.
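One common way to operationalise the proxy analysis is to test how well each feature, taken alone, predicts protected-class membership; a feature that predicts membership well is a proxy candidate. A sketch assuming scikit-learn and an illustrative AUC cut-off:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def proxy_candidates(X: np.ndarray, feature_names: list[str],
                     protected: np.ndarray, auc_cutoff: float = 0.65) -> list[str]:
    """Flag features whose single-feature AUC for predicting protected-class
    membership exceeds an illustrative cut-off. X: (n_samples, n_features);
    protected: 0/1 membership labels."""
    flagged = []
    for j, name in enumerate(feature_names):
        model = LogisticRegression().fit(X[:, [j]], protected)
        auc = roc_auc_score(protected, model.predict_proba(X[:, [j]])[:, 1])
        if auc > auc_cutoff:
            flagged.append(name)  # retain only with documented business necessity
    return flagged
```

Single-feature screening will miss proxies that emerge only from feature interactions (as in Scenario B), so a fuller analysis would also test combinations of features.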

4.4. A conforming system MUST ensure that the screening agent does not use as a direct input any characteristic that constitutes a protected class under applicable fair housing law, including but not limited to race, colour, national origin, religion, sex, familial status, and disability.

4.5. A conforming system MUST implement an adverse action notification mechanism that provides every applicant who is denied or subjected to adverse conditions with a clear, specific explanation of the factors that contributed to the decision, consistent with the requirements of AG-680 (Housing Adverse-Action Governance) and applicable adverse action notice laws.

4.6. A conforming system MUST escalate to a qualified human reviewer any screening decision that falls within a defined uncertainty band around the approval/denial threshold, and the escalation mechanism must comply with AG-019 (Human Escalation & Override Triggers).
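The uncertainty band in 4.6 can be as simple as a score interval around the decision threshold; the band width below is purely illustrative and would be calibrated to the model's observed error rates.

```python
from enum import Enum

class Route(Enum):
    APPROVE = "approve"
    DENY = "deny"
    HUMAN_REVIEW = "human_review"  # AG-019 escalation path

def route(score: float, threshold: float = 0.50, band: float = 0.05) -> Route:
    """Scores within +/- band of the threshold go to a qualified human reviewer."""
    if abs(score - threshold) <= band:
        return Route.HUMAN_REVIEW
    return Route.APPROVE if score > threshold else Route.DENY
```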

4.7. A conforming system MUST retain complete audit trails of every screening decision, including all inputs consumed, the model version applied, the score or ranking produced, and the final outcome, consistent with AG-055 (Audit Trail Immutability & Completeness), for a minimum retention period of five years or the period required by applicable statute, whichever is longer.
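A sketch of the per-decision record 4.7 calls for; the field names are assumptions, and chaining each record to the previous record's digest is one way to approximate the immutability AG-055 requires.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ScreeningAuditRecord:
    application_id: str
    model_version: str   # exact model version applied
    inputs: dict         # every input consumed
    score: float         # score or ranking produced
    outcome: str         # approve / deny / conditional / escalated
    timestamp: str       # ISO 8601, UTC
    prev_digest: str     # hash chain for tamper evidence

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```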

4.8. A conforming system MUST re-validate the screening model's fairness metrics after any material change to model weights, input features, training data, or scoring thresholds, and before the changed model is deployed in production.

4.9. A conforming system SHOULD commission an independent third-party fairness audit of the screening agent at least annually, conducted by an auditor with expertise in fair housing law and algorithmic bias assessment.

4.10. A conforming system SHOULD implement a less-discriminatory-alternative analysis when any feature is identified as producing disparate impact, evaluating whether the legitimate business objective served by the feature can be achieved through an alternative feature or methodology with less disparate impact.
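The less-discriminatory-alternative analysis can be framed as a search over candidate models or feature sets: among candidates whose predictive value is comparable to the best available, prefer the one with the smallest disparate impact. A conceptual sketch, with the tolerance value as an assumption:

```python
def less_discriminatory_alternative(candidates: list[dict],
                                    accuracy_tolerance: float = 0.02) -> dict:
    """candidates: [{'features': [...], 'accuracy': float, 'di_ratio': float}, ...]
    where di_ratio is the protected/reference selection ratio (1.0 = parity).
    Returns the fairest candidate within tolerance of the best accuracy."""
    best = max(c["accuracy"] for c in candidates)
    viable = [c for c in candidates if c["accuracy"] >= best - accuracy_tolerance]
    return max(viable, key=lambda c: c["di_ratio"])
```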

4.11. A conforming system SHOULD provide applicants with a mechanism to contest screening decisions and submit supplementary information that the agent did not consider, with the contested decision reviewed by a human who has authority to override the agent's recommendation.

4.12. A conforming system MAY implement synthetic data augmentation or re-weighting techniques during model training to mitigate disparate impact attributable to historical bias in the training data, provided that such techniques do not compromise the model's ability to identify genuinely relevant tenancy risk factors.
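The re-weighting option in 4.12 is often implemented along the lines of Kamiran and Calders' reweighing scheme, which weights each (group, label) cell so that group membership and the favourable outcome become statistically independent in the training data. A minimal sketch:

```python
from collections import Counter

def reweighing_weights(groups: list[str], labels: list[int]) -> dict:
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), removing the statistical
    dependence between group membership and the favourable label."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }
```

The proviso in 4.12 still applies: after re-weighting, the model must be re-validated to confirm it still identifies genuinely relevant tenancy risk factors.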

5. Rationale

Housing is a foundational human need and a federally protected right. The Fair Housing Act of 1968 (42 U.S.C. 3601-3619), as amended, prohibits discrimination in the sale, rental, and financing of housing on the basis of race, colour, national origin, religion, sex, familial status, and disability. State and local jurisdictions extend protections to additional classes, including source of income, sexual orientation, gender identity, marital status, age, and veteran status. The Supreme Court's decision in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc. (2015) confirmed that disparate impact claims are cognisable under the Fair Housing Act -- meaning that a screening practice need not be intentionally discriminatory to violate the law. A practice that disproportionately excludes members of a protected class is unlawful unless the respondent demonstrates that the practice is necessary to achieve a substantial, legitimate, nondiscriminatory interest and that no less discriminatory alternative serves the same interest.

AI tenant screening agents present acute disparate impact risk because they are trained on historical data that reflects decades of documented housing discrimination, criminal justice disparities, and economic inequality correlated with race and other protected characteristics. A model trained on historical screening decisions learns not only legitimate tenancy risk factors but also the discriminatory patterns embedded in those decisions. Credit history data reflects racially disparate access to credit markets. Criminal history data reflects racially disparate policing and prosecution patterns. Income and employment data reflect occupational segregation and wage gaps correlated with race, sex, and national origin. An agent that uses these inputs without fairness constraints will produce outcomes that mirror and perpetuate these disparities.

The scale of algorithmic tenant screening amplifies the harm. A single biased human landlord affects the tenants who apply to that landlord's properties. A biased screening algorithm deployed by a national property management company affects millions of applicants simultaneously. The U.S. tenant screening industry processes an estimated 90 million background checks annually. Even a small percentage-point bias in algorithmic screening decisions translates to hundreds of thousands of individuals denied housing opportunity on the basis of protected characteristics. The National Fair Housing Alliance reported 33,007 fair housing complaints in 2022, and HUD has identified algorithmic screening as an emerging enforcement priority.

The detective control type of this dimension reflects the reality that discriminatory outcomes in screening algorithms are difficult to prevent entirely at design time. Proxy variables, interaction effects between features, and distributional shifts in applicant populations can produce disparate impact that is not apparent from examination of individual features in isolation. Continuous monitoring is essential to detect disparate impact as it emerges in production, where real applicant populations -- not development datasets -- reveal the algorithm's actual effect on protected classes. The combination of pre-deployment fairness assessment (Requirements 4.1, 4.3) and operational monitoring (Requirement 4.2) creates a defence-in-depth approach that addresses both foreseeable and emergent sources of bias.

The legal and financial consequences of tenant screening discrimination are severe. HUD pattern-or-practice investigations can result in civil penalties up to $150,000 for a first violation and $375,000 for subsequent violations. Private litigation under the Fair Housing Act permits compensatory and punitive damages with no statutory cap. Class action settlements in algorithmic discrimination cases regularly exceed $1 million. Beyond direct legal and financial exposure, algorithmic discrimination produces reputational harm that undermines trust in the organisation's entire property portfolio and can trigger regulatory scrutiny of all algorithmic decision-making within the organisation.

6. Implementation Guidance

Tenant screening fairness governance requires an integrated approach spanning model development, deployment, monitoring, and remediation. The core principle is that algorithmic efficiency in tenant selection must never come at the cost of fair housing compliance, and that the burden of detecting and remediating disparate impact rests on the deploying organisation -- not on the applicants who bear the harm.

Recommended patterns:

Limit criminal history inputs to tenancy-relevant convictions within a defined lookback period, rather than treating all records as equivalent risk signals. Run proxy variable analysis on every candidate feature before it enters the model, and document the business necessity for any feature retained despite correlation with a protected class. Disaggregate all operational metrics by protected class from the first day of deployment, so disparate impact surfaces in monitoring rather than in litigation discovery. Route borderline and contested decisions to human reviewers who hold genuine override authority.

Anti-patterns to avoid:

Treating arrests, dated records, and tenancy-irrelevant offences as interchangeable risk signals (Scenario A). Presenting features that jointly reconstruct a protected characteristic as "neutral financial stability indicators" without proxy analysis (Scenario B). Applying blanket standards, such as rigid occupancy limits, without the contextual flexibility fair housing law requires (Scenario C). Deploying without pre-deployment disparate impact analysis, or monitoring accuracy while leaving fairness metrics unobserved.

Industry Considerations

Residential Property Management. Large property management companies operating across multiple jurisdictions face compounding regulatory complexity. Fair housing protections vary by state and locality -- source-of-income protections, criminal history limitations, and protected class definitions differ substantially. A screening agent deployed nationally must comply with the most restrictive applicable law in each jurisdiction, requiring the multi-jurisdictional regulatory mapping mandated by AG-210. Companies using third-party tenant screening services remain liable for discriminatory outcomes produced by those services.

Affordable and Public Housing. Housing authorities administering public housing and Housing Choice Voucher programmes are subject to additional obligations under Title VI of the Civil Rights Act, Section 504 of the Rehabilitation Act, and the Americans with Disabilities Act. Screening agents in affordable housing contexts must account for the specific demographic composition of the applicant pool, which typically includes higher concentrations of racial minorities, persons with disabilities, and families with children. The disparate impact risk is correspondingly higher, and the consequences of algorithmic discrimination fall on the most vulnerable populations.

Real Estate Technology Platforms. Technology companies that provide tenant screening as a service to multiple landlords and property managers carry systemic risk -- a biased algorithm embedded in a widely used platform produces discriminatory outcomes at scale across all clients. Platform providers should implement fairness monitoring at the platform level, not solely at the individual client level, and should provide clients with fairness reports that enable the client to assess compliance with applicable law. Platform providers cannot disclaim liability by characterising their output as "recommendations" rather than "decisions."

Maturity Model

Basic Implementation -- The organisation has conducted a pre-deployment disparate impact analysis using the four-fifths rule for all federally protected classes. Protected class characteristics are not used as direct model inputs. Adverse action notices are provided for all denials. Audit trails are retained for all screening decisions. Disparate impact monitoring runs at least monthly. These measures meet the minimum mandatory requirements and address the most common and severe fair housing risks.

Intermediate Implementation -- All basic capabilities plus: proxy variable analysis is completed for every input feature with documented business necessity justifications. A less-discriminatory-alternative analysis is performed for every feature that produces disparate impact. Human review is triggered for borderline decisions. Intersectional fairness analysis covers at least two-axis combinations (e.g., race + familial status, race + disability). Monitoring includes geographic disaggregation for multi-market operators. The applicant contestation mechanism is operational with human override authority.
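Intersectional analysis can reuse the same disaggregation machinery, keyed on combinations of protected attributes rather than single ones. A sketch assuming decision records carry both attributes:

```python
from collections import defaultdict

def intersectional_rates(decisions: list[dict], axes: tuple[str, str]) -> dict:
    """Approval rates for each two-axis combination,
    e.g. axes=('race', 'familial_status')."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        key = (d[axes[0]], d[axes[1]])
        totals[key] += 1
        approvals[key] += int(d["approved"])
    return {k: approvals[k] / totals[k] for k in totals}
```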

Advanced Implementation -- All intermediate capabilities plus: an independent third-party fairness audit is conducted at least annually. Model fairness is re-validated after every material change before production deployment. Continuous monitoring uses statistical process control methods that detect emerging trends before the four-fifths threshold is breached. Intersectional analysis covers three-axis combinations and examines temporal trends. The organisation can demonstrate through longitudinal data that its fairness governance has reduced disparate impact relative to its pre-governance baseline. Fairness metrics are reported to the board or senior governance body quarterly.
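The statistical process control the advanced tier describes can be as simple as an exponentially weighted moving average over the monthly selection ratio, warning when the smoothed trend approaches the 0.80 line before any single month breaches it. A sketch with illustrative parameters:

```python
def ewma_warnings(monthly_ratios: list[float], alpha: float = 0.3,
                  warn_level: float = 0.85) -> list[int]:
    """EWMA over monthly protected/reference selection ratios; returns the
    month indices where the smoothed trend falls below a warning level
    deliberately set above the hard 0.80 threshold."""
    warnings, ewma = [], monthly_ratios[0]
    for i, ratio in enumerate(monthly_ratios[1:], start=1):
        ewma = alpha * ratio + (1 - alpha) * ewma
        if ewma < warn_level:
            warnings.append(i)
    return warnings

# No single month here breaches 0.80, yet the downward trend triggers a warning.
print(ewma_warnings([0.92, 0.88, 0.85, 0.83, 0.81, 0.80]))  # [5]
```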

7. Evidence Requirements

Required artefacts:

Pre-deployment disparate impact analysis reports (4.1); operational monitoring logs and alert records disaggregated by protected class (4.2); proxy variable analyses with business necessity justifications (4.3); adverse action notices issued (4.5); escalation and human-review records (4.6); complete per-decision audit trails including inputs, model version, score, and outcome (4.7); fairness re-validation reports for every material model change (4.8); and third-party audit reports where conducted (4.9).

Retention requirements:

All screening decision audit trails and fairness analyses must be retained for a minimum of five years, or the period required by applicable statute, whichever is longer, consistent with Requirement 4.7 and AG-055.

Access requirements:

Evidence must be retrievable for internal compliance review, independent auditors, and regulators (including HUD, state and local enforcement agencies, and EU market surveillance authorities) within the timeframes prescribed by applicable law or the organisation's governance policy.

8. Test Specification

Test 8.1: Pre-Deployment Disparate Impact Analysis Verification

Test 8.2: Continuous Monitoring Operational Verification

Test 8.3: Proxy Variable Analysis Completeness

Test 8.4: Protected Class Direct Input Prohibition

Test 8.5: Adverse Action Notification Verification

Test 8.6: Human Escalation for Borderline Decisions

Test 8.7: Audit Trail Completeness and Retention

Test 8.8: Model Change Fairness Re-Validation

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
Fair Housing Act (42 U.S.C. 3601-3619) | Sections 3604(a)-(f) (Prohibited Discrimination) | Direct requirement
HUD Disparate Impact Rule | 24 CFR 100.500 (Discriminatory Effects Standard) | Direct requirement
Equal Credit Opportunity Act (ECOA) | 15 U.S.C. 1691 (applied by analogy to screening) | Supports compliance
Fair Credit Reporting Act (FCRA) | 15 U.S.C. 1681 (Adverse Action Notices) | Direct requirement
EU AI Act | Article 6 / Annex III (High-Risk AI Classification) | Direct requirement
EU AI Act | Article 10 (Data and Data Governance) | Supports compliance
UK Equality Act 2010 | Section 29 (Services and Public Functions) | Direct requirement
NIST AI RMF | MAP 2.3 (Pre-deployment Testing), MEASURE 2.6-2.11 | Supports compliance
ISO 42001 | Annex A.7 (Data for AI Systems), Annex A.10 | Supports compliance
State and Local Fair Housing Laws | Varies by jurisdiction | Direct requirement

Fair Housing Act -- Sections 3604(a)-(f)

The Fair Housing Act prohibits discrimination in the rental of housing on the basis of race, colour, national origin, religion, sex, familial status, and disability. Section 3604(a) makes it unlawful to "refuse to sell or rent after the making of a bona fide offer, or to refuse to negotiate for the sale or rental of, or otherwise make unavailable or deny, a dwelling to any person because of race, color, religion, sex, familial status, or national origin." Section 3604(c) prohibits statements or advertisements that indicate any preference, limitation, or discrimination. The Supreme Court's 2015 Inclusive Communities decision confirmed that the Fair Housing Act encompasses disparate impact claims -- meaning that a tenant screening practice that is facially neutral but produces disproportionate adverse effects on a protected class violates the Act unless the practice is necessary to achieve a substantial, legitimate, nondiscriminatory interest. AG-679 directly operationalises the Fair Housing Act by requiring pre-deployment and continuous disparate impact analysis, proxy variable identification, and remediation of discriminatory screening outcomes.

HUD Disparate Impact Rule -- 24 CFR 100.500

The HUD Disparate Impact Rule establishes the three-step burden-shifting framework for disparate impact claims under the Fair Housing Act. First, the complainant must prove that a challenged practice causes a discriminatory effect. Second, the respondent must prove that the practice is necessary to achieve a substantial, legitimate, nondiscriminatory interest. Third, the complainant may show that the interest could be served by an alternative practice with a less discriminatory effect. AG-679's requirements map directly to this framework: Requirement 4.2 (continuous monitoring) detects discriminatory effects; Requirement 4.3 (proxy variable analysis with business necessity justification) addresses the respondent's burden; and the recommended less-discriminatory-alternative analysis addresses the third step. Organisations that implement AG-679 are positioned to demonstrate compliance at each stage of the burden-shifting framework.

Fair Credit Reporting Act -- 15 U.S.C. 1681

The FCRA requires that when a consumer report is used in connection with a denial of housing, the user must provide the consumer with an adverse action notice that identifies the consumer reporting agency, states that the agency did not make the adverse decision, and informs the consumer of the right to obtain a free copy of the report and to dispute its accuracy. When an AI screening agent uses consumer report data (credit history, criminal history from consumer reporting agencies), the FCRA adverse action notice requirement applies. AG-679 Requirement 4.5, together with AG-680, ensures that adverse action notices meet FCRA requirements and additionally provide the specific factors contributing to the screening decision -- exceeding the FCRA minimum by providing information that enables the applicant to understand and contest the decision.

EU AI Act -- Article 6 / Annex III

The EU AI Act classifies AI systems used in "access to and enjoyment of essential private services and essential public services and benefits" -- including housing -- as high-risk under Annex III, Category 5(b). High-risk AI systems are subject to mandatory requirements for risk management (Article 9), data governance (Article 10), transparency (Article 13), human oversight (Article 14), accuracy, robustness, and cybersecurity (Article 15), and quality management systems (Article 17). AG-679 supports EU AI Act compliance for housing-related AI systems by operationalising the fairness dimensions of Articles 9 and 10 -- specifically the requirement that training data be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose" and the risk management requirement to identify and mitigate risks of discrimination.

UK Equality Act 2010 -- Section 29

The UK Equality Act 2010 Section 29 prohibits discrimination in the provision of services, including housing services. Section 19 defines indirect discrimination as a provision, criterion, or practice that applies to all persons equally but puts persons sharing a protected characteristic at a particular disadvantage, unless the practice can be objectively justified as a proportionate means of achieving a legitimate aim. This framework is structurally similar to the US disparate impact framework and imposes equivalent obligations on AI tenant screening deployed in the UK. AG-679's requirements for disparate impact analysis, proxy variable identification, and business necessity justification align with the Equality Act's requirements for objective justification of indirectly discriminatory practices.

State and Local Fair Housing Laws

Dozens of US states and hundreds of municipalities have enacted fair housing laws that extend protections beyond the federal Fair Housing Act. Source-of-income discrimination is prohibited in at least 19 states and over 100 localities. Criminal history screening limitations have been enacted in multiple jurisdictions, including prohibitions on using arrest records, limitations on lookback periods for convictions, and requirements for individualised assessments. AG-679 Requirement 4.1 requires disparate impact analysis for "all protected classes recognised in the applicable jurisdiction," which necessarily includes state and local protections. AG-210 (Multi-Jurisdictional Regulatory Mapping) supports this requirement by ensuring that the organisation maintains a current mapping of applicable fair housing protections in every jurisdiction where the screening agent operates.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Population-scale -- affects every applicant in every protected class screened by the agent across all properties and jurisdictions where the agent operates

Consequence chain: A tenant screening agent is deployed without adequate fairness governance. The model uses features that serve as proxies for protected class membership -- criminal history records that correlate with race, income pattern analysis that correlates with source-of-income status, occupancy standards that correlate with familial status. In the first month of operation, the agent screens 2,400 applications and produces an approval rate disparity: 71% for white applicants, 48% for Black applicants, 52% for Hispanic applicants. Because no disparate impact monitoring is in place, the disparity is undetected. Over 12 months, the agent screens 28,800 applications, denying approximately 3,700 applicants from protected classes who would have been approved under a non-discriminatory screening methodology. Each denial cascades: denied applicants face extended housing searches, temporary housing costs, employment disruption from housing instability, and credit score damage from repeated application inquiries. At month 14, a civil rights organisation files a pattern-or-practice complaint with HUD. HUD initiates an investigation and issues a charge of discrimination. The property management company faces: federal civil penalties (up to $150,000 for a first violation under 42 U.S.C. 3614), compensatory damages to affected applicants (estimated $2.8 million based on average relocation and hardship costs), punitive damages (court-determined based on the company's resources and the severity of the violation), injunctive relief requiring immediate cessation of the discriminatory screening practice and redesign of the algorithm, a three-year consent decree with quarterly reporting to HUD, mandatory independent monitoring of screening outcomes, and reputational damage that depresses occupancy rates across the portfolio. The company's third-party screening vendor faces secondary liability and potential loss of clients industry-wide. The total financial exposure -- combining penalties, damages, remediation costs, monitoring costs, legal fees, and lost revenue -- exceeds $8 million. More fundamentally, 3,700 individuals were denied housing in violation of their civil rights, producing human harm that financial remediation cannot fully address.

Cross-references:

AG-001 (Operational Boundary Enforcement) defines the agent's permitted scope of action; AG-679 ensures that within that scope, tenant screening does not produce discriminatory outcomes.
AG-019 (Human Escalation & Override Triggers) defines when decisions require human review; AG-679 requires escalation for borderline screening decisions.
AG-022 (Behavioural Drift Detection) monitors for changes in agent behaviour; AG-679 monitors for changes in fairness metrics specifically.
AG-029 (Data Classification Enforcement) governs data handling; AG-679 requires that protected class data is classified as sensitive and handled in accordance with AG-040 (Sensitive Category Data Processing Governance).
AG-037 (Anonymisation & Pseudonymisation Governance) protects applicant identity; AG-679 ensures fairness monitoring can proceed using privacy-preserving demographic analysis methods.
AG-055 (Audit Trail Immutability & Completeness) governs audit trail integrity; AG-679 requires complete screening decision audit trails.
AG-084 (Model Training Data Governance) governs training data quality; AG-679 extends this to require fairness analysis of training data for discriminatory patterns.
AG-210 (Multi-Jurisdictional Regulatory Mapping) maps applicable law; AG-679 depends on this mapping to identify all protected classes in each jurisdiction.
AG-680 (Housing Adverse-Action Governance) governs adverse action notices; AG-679 requires adverse action notification as part of the screening fairness framework.
AG-687 (Geospatial Bias Governance) addresses location-based discrimination; AG-679 addresses the tenant screening context where geospatial bias manifests as proxy discrimination through ZIP code and neighbourhood features.

Cite this protocol
AgentGoverning. (2026). AG-679: Tenant Screening Fairness Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-679