Google announced on Tuesday that it has overhauled its principles governing artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
The change was disclosed in a note appended to the top of the 2018 blog post that announced the guidelines. “We’ve updated our AI Principles. Visit AI.Google for the latest information,” the note reads.
In a blog post published Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to the overhaul of Google’s AI principles.
Google first published its AI principles in 2018, moving to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But with Tuesday’s announcement, Google has done away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document gives Google more room to pursue potentially sensitive use cases. It states that Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”
Several Google employees expressed concern about the changes in conversations with WIRED. “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company shouldn’t be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA.
Got a Tip?
Are you a current or former employee at Google? We’d like to hear from you. Using a nonwork phone or computer, contact Paresh Dave at +1-415-565-1302 or paresh_dave@wired.com, or reach out at +1 785-813-1084 or …kins@gmail.com.
US President Donald Trump’s inauguration last month has prompted many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the company’s changes have been in the works much longer.
Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”