In the face of the proliferation of algorithmic decision-making in all facets of business, it was only a matter of time until the government got more involved in regulating or policing its use. Enter the Equal Employment Opportunity Commission (EEOC) with its first guidance on algorithmic decision-making in employment and the discrimination that can result.

In its guidance, structured in a Q&A format, the agency defines an algorithm as “a set of instructions that can be followed by a computer to accomplish some end.” Broadly written, the guidance is intended to cover a wide range of technologies, specifically including software, algorithmic decision-making, and artificial intelligence (AI). It cautions employers that in using one of these technologies they may unwittingly violate the Americans with Disabilities Act (ADA), and that seemingly simple systems such as resume-screening software, hiring software, and employee monitoring tools may subject the employer to charges of ADA violations. It lists some common ways in which algorithmic decision-making could violate the ADA, including: failing to provide reasonable accommodations; screening out individuals with disabilities; or making a disability-related inquiry or seeking information that qualifies as a medical examination.

And if that weren’t bad enough, the EEOC recently further complicated successfully operating a business with its decision to officially recognize a third gender option. That’s right: in a bulletin dated June 27, 2022, the EEOC announced that individuals filing a complaint alleging discrimination would not be limited to male or female, but would have a nonbinary gender marker option in the EEOC charge intake process. That’s trouble waiting to happen.