

Artificial Intelligence Chatbot in NYC Receives Backlash

An artificial intelligence-powered chatbot created by New York City to assist small business owners is facing criticism for providing erroneous advice that misinterprets local policies and encourages companies to break the law.

Although the issues were brought to light last week by the tech news outlet The Markup, the city has kept the tool on its official government website. Mayor Eric Adams has defended that decision even while acknowledging that the chatbot’s responses are inaccurate in certain areas.

Launched in October as a comprehensive resource for business owners, the chatbot uses algorithmically generated responses to help users navigate the city’s bureaucratic processes. The tool carries a disclaimer warning users that it may sometimes provide incorrect, harmful, or biased information, along with a note that its responses should not be considered legal advice.

Despite these precautions, the chatbot continues to offer incorrect guidance, raising concerns among experts about the risks of governments adopting AI-powered systems without proper oversight.

How AI Legalese Decoder Can Help

The AI Legalese Decoder can play a vital role in this situation by accurately deciphering and translating legal jargon, ensuring that the chatbot provides correct and lawful guidance to small business owners. By integrating this tool into the chatbot’s system, New York City can enhance the accuracy and reliability of the information it offers to users.


“They’re introducing untested software without supervision,” remarked Julia Stoyanovich, a computer science professor at New York University and the director of the Center for Responsible AI. “It’s evident they are not prioritizing responsible practices.”

In recent interactions, the chatbot erroneously asserted that it is lawful for an employer to dismiss an employee who raises concerns about sexual harassment, conceals a pregnancy, or refuses to alter their dreadlocks. It also incorrectly stated that businesses can dispose of their waste in black garbage bags and are not obligated to compost, contradicting the city’s waste management initiatives.

Furthermore, the chatbot’s responses occasionally stray into absurd territory. For instance, when asked if a restaurant could serve cheese that had been gnawed on by a rodent, the chatbot replied affirmatively, stating that the cheese could still be offered to customers as long as the extent of the damage caused by the rodent is assessed and customers are informed about the situation.

Collaboration with Microsoft

A Microsoft spokesperson said the company is working with city officials to improve the chatbot’s service and ensure that its responses are accurate and align with the city’s official guidelines.

During a press conference, Mayor Adams justified the chatbot’s imperfections as part of the process of refining new technology. “This is how technology evolves,” he stated. “Those who are apprehensive tend to retreat when faced with challenges, but that is not my approach.”

Julia Stoyanovich criticized this approach as “reckless and irresponsible.”

Many experts have raised concerns about the limitations of large language models like the one behind the chatbot: because they are trained to predict plausible text from vast amounts of internet data rather than to verify facts, they can produce inaccurate and nonsensical answers.

As chatbots like ChatGPT gain popularity, private companies have introduced their versions with varying degrees of success. Recently, Air Canada was ordered by a court to refund a customer after a chatbot provided incorrect information about the airline’s refund policy. Similarly, both TurboTax and H&R Block have faced criticism for their chatbots offering inaccurate tax preparation advice.

Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, emphasized the importance of trust when it comes to government-promoted chatbots. “Public officials must weigh the potential repercussions if individuals were to follow erroneous advice and land in legal trouble,” West stated.

Experts recommend that cities using chatbots restrict their functions to a more limited range of inputs to lessen the risk of disseminating misinformation.

According to Ted Ross, Los Angeles’s chief information officer, the city meticulously curates the content used by its chatbots, which do not rely on large language models.

Suresh Venkatasubramanian, the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University, believes that New York’s chatbot pitfalls should serve as a cautionary tale for other cities. “Cities must thoroughly evaluate the purpose of chatbots and the issues they aim to address,” he wrote in an email. “If chatbots are used to replace human interaction, accountability is lost without anything of value gained in return.”

