Users mock Microsoft for woke Word feature
Microsoft users are criticizing the tech giant for a Word feature that flags common phrases as not “inclusive” and suggests alternatives.
Netizens have been sharing screenshots of the suggestions Microsoft Word makes when they type phrases like “maternity leave.”
“This term may not be inclusive of all genders,” the program warns, suggesting alternative phrases like “birth-related leave,” “parental leave,” or “childbirth leave.” When users write “paternity leave,” Word suggests “child-bonding leave.”
The woke feature was rolled out in 2019 and is built into Microsoft’s applications, but users were recently taken aback by its attempt to police the words “maternity” and “paternity.”
“This is a particularly insidious form of language policing, reminiscent of 1984,” Free Speech Union Director Toby Young told The Telegraph. “It’s as though there’s a censor in your computer scolding you for departing from politically correct orthodoxy.”
Microsoft is known for trying to force its users to embrace woke ideologies.
Users held accountable for carbon emissions
Last year, the megacorporation rolled out a “carbon aware console” update for Xbox which forces gamers to fight climate change. One change is that the console no longer schedules its nightly maintenance for a fixed window between 2 AM and 6 AM, but instead for a time “when it can use the most renewable energy in your local energy grid.”
The gaming giant also automatically sets systems to “Shutdown (energy saving)” mode. Unlike Sleep mode, Shutdown mode means the system takes longer to boot up and cannot be turned on remotely. But on the upside, says the company, it uses 20 times less power than Sleep mode.
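The carbon-aware scheduling described above can be sketched in a few lines: given a forecast of how much of the local grid’s power comes from renewables at each hour, pick the greenest hour inside an allowed overnight window. This is an illustrative assumption about how such a scheduler might work, not Microsoft’s actual code; the function name and forecast numbers are made up.

```python
# Hypothetical sketch of carbon-aware scheduling: choose the hour in an
# allowed window with the highest forecast share of renewable energy.
# The forecast values below are illustrative, not real grid data.

def pick_maintenance_hour(renewable_share_by_hour, window):
    """Return the hour in `window` with the highest forecast renewable share."""
    return max(window, key=lambda hour: renewable_share_by_hour[hour])

# Illustrative forecast: fraction of grid power from renewables, per hour.
forecast = {2: 0.31, 3: 0.35, 4: 0.52, 5: 0.48}

best = pick_maintenance_hour(forecast, window=range(2, 6))
print(best)  # → 4 (the greenest hour in the 2 AM - 6 AM window)
```

The same idea generalizes: instead of a fixed 2 AM slot, the scheduler optimizes over whatever window the user permits.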
“Xbox is working to reduce our environmental impact to help us reach Microsoft’s goal of being a carbon negative, water positive, and zero waste company by 2030 by rethinking how we design, build, distribute, and use our products,” wrote Xbox.
“We not only hold ourselves accountable to the carbon emissions in the production and distribution of our products, but to the emissions created with the use of our products in the homes of our fans as well. So, the way we design our hardware and software to be more efficient and optimized for renewable energy can have a big impact.”
Using AI to police ‘harmful content’
In May 2023, Microsoft launched an AI-powered censorship tool called Azure Content Safety, which applies image-recognition algorithms and AI language models to scour images and text for “harmful” content. The offending text or image is placed into one of four categories (sexual, violent, self-harm, or hate) and assigned a severity score between one and six.
While part of Microsoft’s Azure product line, Content Safety is designed as standalone software that third parties can use to police their own spaces such as gaming sites, social media platforms or chat forums. It understands 20 languages, along with the nuance and context used in each.
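A third party wiring such a tool into its own platform would, in effect, threshold the per-category severity scores described above. The sketch below is a hedged illustration of that downstream logic only: the category names follow the article, but the scores, threshold, and function are invented for the example and do not show the real service’s API.

```python
# Illustrative sketch: act on a moderation result of the kind described
# above, where each text gets a severity score (1-6) in four categories.
# The scores and threshold here are made-up examples.

CATEGORIES = ("sexual", "violence", "self_harm", "hate")

def moderate(scores, threshold=4):
    """Return the categories whose severity meets or exceeds `threshold`."""
    return [c for c in CATEGORIES if scores.get(c, 0) >= threshold]

# Hypothetical result for one piece of user-submitted text:
result = {"sexual": 1, "violence": 5, "self_harm": 1, "hate": 2}

flagged = moderate(result)
print(flagged)  # → ['violence']
```

A platform could then hide, queue for human review, or delete content depending on which categories come back flagged.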
Microsoft assures users that the product was programmed by “fairness experts” who defined what constitutes “harmful content.”
“We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context,” a Microsoft spokesperson told TechCrunch. “We then trained the AI models to reflect these guidelines. ... AI will always make some mistakes, so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”