Don't Be Evil: Google Drops Pentagon's A.I. Contract, Creates Censored Search Engine for China

This reminds me of how people have a false sense of security because the majority of the population is vaccinated, so they don't get vaccinated and then... disease spreads. Of course we're safe from terrorism, because we're cooperating with governments in every part of the world and are constantly killing terrorists. You think ISIS and al-Qaeda aren't capable of more attacks?

There was no ISIS when we declared war on terror.

ISIS came about after we created a power vacuum with illegal wars that killed hundreds of thousands and overthrew the strongmen that kept the locals in line.

But let's pretend that American military intervention wasn't pivotal in creating ISIS...


On what planet is ISIS even coming into existence considered anything other than a failure of the war on terror?

It's like you've set up a force to kill Hitler and not only have you failed (Al Qaeda is still around) there's now Super Hitler as well (ISIS).

At what point do you actually realize that killing innocent people in order to kill people who hate you isn't resulting in any fewer people who hate you?

Super Duper Hitler?
UberHitler?

My country had zero Islamic terrorist attacks before the war on terror. Now there have been about three, plus a few thwarted ones.

It's not surprising when the response isn't proportionate to the threat. My country has killed hundreds if not thousands of Muslims, most famously by dropping bombs on that Syrian army base with 83 people in it.

We kill more innocent people than terrorists do. It's not really surprising that a few of the 400,000 Muslims we have think we're not that crash hot.

I'm not saying we shouldn't try to thwart terrorist attacks, or that there should be five or six attacks rather than three in my country. That would be stupid.

I'm saying that treating the killing of innocent people as an acceptable consequence of fighting something that kills fewer people than bunk beds is not only evil but stupid to boot. If we had a proportional response to the threat, that number in my country would likely be zero.




The difference between this and immunizations is that we have indisputably measured success with immunizations. Life expectancy went up massively, which is hardly surprising when infant and child deaths decreased.

This war on terror has had little to no effect.

It goes back to that hazard thing. Childhood illnesses killed far more people than terrorism, so unsurprisingly the response to them produced a much better result. It's like having a kid who's getting 98% in his math class and 74% in his English class: it makes sense to focus on his English grades.

At best you can only get a 2% improvement on that math score.

2016 had the most deaths in America from terrorists since 9/11.

If you stopped all of them you'd have saved 79 lives that year.

It's not worth killing thousands of innocent people per year.
 
Good for the employees for voicing their opinion: that is what makes America great.

Personally, I agree. The business of war is the business of death, and I would not want to be involved in that at any level. However, I'm sure Google will end up doing whatever is in its own best short-term interest, just as most companies do.
 
Inside Google, a Debate Rages:
Should It Sell Artificial Intelligence to the U.S. Military?

By Mark Bergen | May 14, 2018


Last July, 13 U.S. military commanders and technology executives met at the Pentagon's Silicon Valley outpost, two miles from Google headquarters. It was the second meeting of an advisory board set up in 2016 to counsel the military on ways to apply technology to the battlefield. Milo Medin, a Google vice president, turned the conversation to using artificial intelligence in war games. Eric Schmidt, Google’s former boss, proposed using that tactic to map out strategies for standoffs with China over the next 20 years.

A few months later, the Defense Department hired Google’s cloud division to work on Project Maven, a sweeping effort to enhance its surveillance drones with technology that helps machines think and see.

The pact could generate millions in revenue for Alphabet Inc.’s internet giant. But inside a company whose employees largely reflect the liberal sensibilities of the San Francisco Bay Area, the contract is about as popular as President Donald Trump. Not since 2010, when Google retreated from China after clashing with state censors, has an issue so roiled the rank and file. Almost 4,000 Google employees, out of an Alphabet total of 85,000, signed a letter asking Google Chief Executive Officer Sundar Pichai to nix the Project Maven contract and halt all work in “the business of war.”

The petition cites Google’s history of avoiding military work and its famous “do no evil” slogan. One of Alphabet's AI research labs has even distanced itself from the project. Employees against the deal see it as an unacceptable link with a U.S. administration many oppose and an unnerving first step toward autonomous killing machines. About a dozen staff are resigning in protest over the company’s continued involvement in Maven, Gizmodo reported on Monday.

The internal backlash, which coincides with a broader outcry over how Silicon Valley uses data and technology, has prompted Pichai to act. He and his lieutenants are drafting ethical principles to guide the deployment of Google's powerful AI tech, according to people familiar with the plans. That will shape its future work. Google is one of several companies vying for a Pentagon cloud contract worth at least $10 billion. A Google spokesman declined to say whether that has changed in light of the internal strife over military work.

Pichai’s challenge is to find a way of reconciling Google’s dovish roots with its future. Having spent more than a decade developing the industry’s most formidable arsenal of AI research and abilities, Google is keen to wed those advances to its fast-growing cloud-computing business. Rivals are rushing to cut deals with the government, which spends billions of dollars a year on all things cloud. No government entity spends more on such technology than the military. Medin and Alphabet director Schmidt, who both sit on the Pentagon’s Defense Innovation Board, have pushed Google to work with the government on counter-terrorism, cybersecurity, telecommunications and more.

To dominate the cloud business and fulfill Pichai’s dream of becoming an “AI-first company,” Google will find it hard to avoid the business of war.


Inside the company there is no greater advocate of working with the government than Google Cloud chief Diane Greene. In a March interview, she defended the Pentagon partnership and said it's wrong to characterize Project Maven as a turning point. “Google’s been working with the government for a long time,” she said.

The Pentagon created Project Maven about a year ago to analyze mounds of surveillance data. Greene said her division won only a “tiny piece” of the contract, without providing specifics. She described Google's role in benign terms: scanning drone footage for landmines, say, and then flagging them to military personnel. “Saving lives kind of things,” Greene said. The software isn’t used to identify targets or to make any attack decisions, Google says.

Many employees deem her rationalizations unpersuasive. Even members of the AI team have voiced objections, saying they fear working with the Pentagon will damage relations with consumers and Google’s ability to recruit. At the company’s I/O developer conference last week, Greene told Bloomberg News the issue had absorbed much of her time over the last three months.

Googlers’ discomfort with using AI in warfare is longstanding. AI chief Jeff Dean revealed at the I/O conference that he signed an open letter back in 2015 opposing the use of AI in autonomous weapons. Providing the military with Gmail, which has AI capabilities, is fine, but it gets more complex in other cases, Dean said. “Obviously there’s a continuum of decisions we want to make as a company,” he said. Last year, several executives—including Demis Hassabis and Mustafa Suleyman, who run Alphabet’s DeepMind AI lab, and famed AI researcher Geoffrey Hinton—signed a letter to the United Nations outlining their concerns.

“Lethal autonomous weapons … [will] permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter reads. “We do not have long to act.” London-based DeepMind assured staff it’s not involved in Project Maven, according to a person familiar with the decision. A DeepMind spokeswoman declined to comment.

Richard Moyes, director of Article 36, a non-profit focused on weapons, is cautious about pledges from companies that humans—not machines—will still make lethal decisions. “This could be a stepping stone to giving those machines greater capacity to make determination of what is or what’s not a target,” he said. Moyes, a partner of the DeepMind Ethics & Society group, hasn’t spoken to Google or DeepMind about the Pentagon project.

[Photo: Sundar Pichai]

AI military systems have already made mistakes. Nighat Dad, director of the Digital Rights Foundation, cites the case of two Al Jazeera reporters who filed legal complaints that they were erroneously placed on a drone “kill list” by the U.S. government’s Skynet surveillance system. Dad sent a letter in April to Pichai asking Google to end the Project Maven contract, but says she hasn’t received a reply.

The primary concern for some AI experts is that the existing technology is still unreliable and could be commandeered by hackers to make battlefield decisions. “I wouldn’t trust any software to make mission-critical decisions,” says Gary Marcus, an AI researcher at New York University. Project Maven, Marcus says, falls into an ethical “gray area” since the public doesn’t know how the software will be used. “If Google wants to get in the business of doing classified things for the military, then the public has the right to be concerned about what kind of company Google is becoming,” he says. Google's cloud division is not certified to work on classified projects. A Google spokesman declined to say if the company will pursue that certification.

For many years, Google typically exited the government contracts of companies it acquired. In 2011, the year Google bought it, facial recognition startup Pittsburgh Pattern Recognition billed the U.S. $679,910, according to Bloomberg Government data. The next year, Google’s revenue from the U.S. government amounted to less than that. (These figures exclude military spending on Google ads, which are classified numbers and likely equal many millions of dollars a year.) Robot maker Boston Dynamics generated more than $150 million in federal contracts over 13 years before being bought by Google in late 2013. The next year, the contracts ended. (Google agreed to sell Boston Dynamics in 2017).



Since Greene was recruited to run its cloud unit in 2015, Google has become less squeamish about government work. Last year, federal agencies spent more than $6 billion on unclassified cloud contracts, according to Bloomberg Government. About a third of that came from the Defense Department. Right now Amazon.com Inc., Microsoft Corp. and Oracle Corp. are big players. Amazon’s cloud business alone has generated $600 million in classified work with the Central Intelligence Agency since 2014, Bloomberg Government data show.

Greene is determined to compete for such contracts. “We will work with governments because governments need a lot of digital technology,” she said in the March interview. “What’s new, and what we’re having a lot of discussion around, is artificial intelligence.”

After initially wavering on the need for specific AI policies, the Trump Administration is now moving to embrace the technology—a shift driven largely by the looming competitive threat from China and Russia. On April 2, Project Maven received an additional $100 million in government funding. Military officials have cast the program as a key way to reduce time-consuming tasks and make warfare more efficient.

“We can confirm Project Maven involves working with a number of different vendors, and DoD representatives regularly meet with various companies to discuss progress with ongoing projects,” said Defense Department spokeswoman Maj. Audricia Harris. “These internal deliberations are a private matter, therefore it would be inappropriate to provide further details.” ECS Federal, the contractor paying Google for the Project Maven work, didn’t respond to requests for comment.

Defense Secretary James Mattis visited Google in August and met with Pichai, Greene and co-founder Sergey Brin. They discussed the company’s cloud and AI advances as well as other opportunities, such as finding new ways to share telecom spectrum owned by the military, another Google project. (Schmidt, who stepped down as Alphabet chairman in December, recently told Defense One that he’s excluded from decision-making about any Google work with the Pentagon. Notes from the July meeting of the Defense Innovation Board were made public online.) Mattis also visited Microsoft and Amazon during the trip.

Some Google executives consider warmer ties with the government long overdue. Five years ago, relations were strained after Google vocally objected to revelations, uncovered by Edward Snowden, that the National Security Agency had tapped the company's networks. A senior executive involved in recent talks says one objective was to avoid the kind of “pissing contest” between Google and the government that happened after Snowden’s revelations.

But the divide inside the company will not be easily overcome. At several Google-wide meetings since March, Greene and other executives were peppered with questions about the merits of Project Maven. One says a recent justification for moving ahead amounted to: If we don’t do this, a less-scrupulous rival will. “The argument they've been using is terrible,” this person says. Another employee says the anti-Project Maven petition, reported earlier by The New York Times, is one of the largest in the history of the company, which is famous for encouraging internal debate. Gizmodo first reported Google staff concern about the company’s involvement. Pichai has addressed the issue with employees, but has yet to answer their demand to cancel the contract.

Google’s CEO didn’t mention the military deal at the I/O conference. Several executives there said privately that they trusted Pichai to make the appropriate decision. The deal didn’t come up at the event’s marquee session on AI either. Fei-Fei Li, who runs AI for Google Cloud, made a passing mention of ethics. “We talk a lot about building benevolent technology,” she said. “Our technology reflects our values.” Greene, sitting next to her on stage, nodded in agreement.

https://www.bloomberg.com/news/arti...-sell-artificial-intelligence-to-the-military
 
good for the employees for voicing their opinion: that is what makes america great.

personally i agree.. the business of war is the business of death and i would not want to be involved in that at any level. However im sure google will end up doing whatever is in its own best interest in the short-term just as most companies do.

If it wasn't for the business of war, there would be no America, so you are involved in it.

There's no question if Google will share their information with the government. They are legally obligated to.
 
I'm not on board with Google producing military technology... but wouldn't improving the targeting of drones cut down on the accidental killing of bystanders, etc?
 
I work in a technical field in support of the US military. There's a college a few miles south of where I work, and a bar near the college that I used to frequent. Over the years I've heard countless students tell me how our military is evil, or is led by greedy interests, and that we kill innocents, etc. Then I invariably meet some of the same people years later in meetings for platform sustainment, mission software development, or test planning. The real world inevitably slaps these young idealists in the face. Google should send out a survey to see which employees support this. If an overwhelming number support it, then drop the contracts. Then fire half of the people who supported the measure, citing the decrease in revenue as the reason. That would teach them that they can have an opinion and cling to ideals, but you NEVER shit where you eat, because there are consequences.

From what I gathered, an estimated 3,100 signatures were collected on a form (statement), and this is a small percentage of Google. Personal stories like the one you shared never cease to amaze me on Sherdog. In my experience, college tends to build managers who later in life become proficient leaders.
 
Those google employees are always free to find another job if working for a company that does work for the military is against their ideals.
 
Thousands of Google employees have pleaded in a letter to CEO Sundar Pichai to stop providing technology to the Pentagon that could be used to improve the accuracy of drone attacks.
So these brainiacs prefer inaccurate drone attacks?
 
A dozen Google employees quit over military drone project
By Ron Amadeo - 5/15/2018


Despite protests from employees, Google is still charging ahead with a Department of Defense collaboration to produce machine-learning software for drones. Google hasn't listened to a contingent of its employees that is unhappy with Google's involvement in the military-industrial complex, and now a report from Gizmodo says "about a dozen" employees have resigned over the issue.

The controversial program, called "Project Maven," sees Google applying its usual machine-learning and image-recognition expertise to millions of hours of drone footage collected by the military. The goal is to identify people and objects of interest. While a Google spokesperson says the program is "scoped for non-offensive purposes," a letter signed by almost 4,000 Google employees took issue with this assurance, saying, "The technology is being built for the military, and once it's delivered, it could easily be used to assist in [lethal] tasks."

The petition asked that Google immediately cancel the project, saying, "We believe that Google should not be in the business of war."

Opposition to the project isn't just coming from inside Google. An open letter from the International Committee for Robotics Arms Control expressed solidarity with the protesting Google employees and was signed by over 200 researchers and academics in artificial intelligence. The letter says Google should "commit to not weaponizing its technology" and terminate its contract with the DoD.

“If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed," the letter reads, "we can say with certainty that no topic deserves more sober reflection—no technology has higher stakes—than algorithms meant to target and kill at a distance and without public accountability.”

One resigning employee questioned why Google is even bothering with such a controversial program when it is already so massive. “It’s not like Google is this little machine-learning startup that’s trying to find clients in different industries," the anonymous employee told Gizmodo. "It just seems like it makes sense for Google and Google’s reputation to stay out of that.”

“Actions speak louder than words, and that’s a standard I hold myself to as well,” another resigning employee told Gizmodo. “I wasn’t happy just voicing my concerns internally. The strongest possible statement I could take against this was to leave.”

https://arstechnica.com/gadgets/201...n-in-protest-of-googlepentagon-drone-program/
 
I wonder if they are also against Google technology being used to spy on people.
 
Google is taking on the Maven project to avoid an antitrust suit. They're going to go after Facebook instead.

Mark my words: Facebook will be hit with an antitrust suit by the end of this year.

Ad-revenue companies are going to be disrupted big time soon. Basic Attention Token is also going to be a part of it.
 
I'm not on board with Google producing military technology... but wouldn't improving the targeting of drones cut down on the accidental killing of bystanders, etc?

I think you're confusing accidental with arbitrary.
Advancing drone warfare further removes humanity and accountability from warfare. Not a good thing.
 
Google Will Not Renew Pentagon Contract That Upset Employees
By Daisuke Wakabayashi and Scott Shane | June 1, 2018


After employees protested, a Google executive said Friday that the company will not renew a contract to work on artificial intelligence with the Pentagon after it expires next year.

SAN FRANCISCO — Google, hoping to head off a rebellion by employees upset that the technology they were working on could be used for lethal purposes, will not renew a contract with the Pentagon for artificial intelligence work when a current deal expires next year.

Diane Greene, who is the head of the Google Cloud business that won a contract with the Pentagon’s Project Maven, said during a weekly meeting with employees on Friday that the company was backing away from its A.I. work with the military, according to a person familiar with the discussion but not permitted to speak publicly about it.

Google’s work with the Defense Department on the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, roiled the internet giant’s work force. Many of the company’s top A.I. researchers, in particular, worried that the contract was the first step toward using the nascent technology in advanced weapons.

But it is not unusual for Silicon Valley’s big companies to have deep military ties. And the internal dissent over Maven stands in contrast to Google’s biggest competitors for selling cloud-computing services — Amazon.com and Microsoft — which have aggressively pursued Pentagon contracts without pushback from their employees.

Google’s self-image is different — it once had a motto of “don’t be evil.” A number of its top technical talent said the internet company was betraying its idealistic principles, even as its business-minded officials worried that the protests would damage its chances to secure more business from the Defense Department.

About 4,000 Google employees signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology.” A handful of employees also resigned in protest, while some were openly advocating the company to cancel the Maven contract.

Months before it became public, senior Google officials were worried about how the Maven contract would be perceived inside and outside the company, The New York Times reported this week. By courting business with the Pentagon, they risked angering a number of the company’s highly regarded A.I. researchers, who had vowed that their work would not become militarized.

Jim Mattis, the defense secretary, had reached out to tech companies and sought their support and cooperation as the Pentagon makes artificial intelligence a centerpiece of its weapons strategy. The decision made by Google on Friday is a setback to that outreach.

But if Google drops out of some or all of the competition to sell the software that will guide future weaponry, the Pentagon is likely to find plenty of other companies happy to take the lucrative business. A Defense Department spokeswoman did not reply to a request for comment on Friday.

Ms. Greene’s comments were reported earlier by Gizmodo.

The money for Google in the Project Maven contract was never large by the standards of a company with revenue of $110 billion last year — $9 million, one official told employees, or a possible $15 million over 18 months, according to an internal email.

But some company officials saw it as an opening to much greater revenue down the road. In an email last September, a Google official in Washington told colleagues she expected Maven to grow into a $250 million-a-year project, and eventually it could have helped open the door to contracts worth far more, notably a multiyear, multibillion-dollar cloud-computing project called JEDI, or Joint Enterprise Defense Infrastructure.

Whether Google’s Maven decision is a short-term reaction to employee protests and adverse news coverage or reflects a more sweeping strategy not to pursue military work is unclear. The question of whether a particular contract contributes to warfare does not always have a simple answer.

When the Maven work came under fire inside Google, company officials asserted that it was not “offensive” in nature. But Maven is using the company’s artificial intelligence software to improve the sorting and analysis of imagery from drones, and some drones rely on such analysis to identify human targets for lethal missile shots.

Google management had told employees that it would produce a set of principles to guide its choices in the use of artificial intelligence for defense and intelligence contracting. At Friday’s meeting, Ms. Greene said the company was expected to announce those guidelines next week.

Google has already said that the new artificial intelligence principles under development precluded the use of A.I. in weaponry. But it was unclear how such a prohibition would be applied in practice and whether it would affect Google’s pursuit of the JEDI contract.

Defense Department officials are themselves wrestling with the complexity of their move into cloud computing and artificial intelligence. Critics have questioned the proposal to give the entire JEDI contract, which could extend for 10 years, to a single vendor. This week, officials announced they were slowing the contracting process down.

Dana White, the Pentagon spokeswoman, said this week that the JEDI contract had drawn “incredible interest” and more than 1,000 responses to a draft request for proposals. But she said officials wanted to take their time.

“So, we are working on it, but it’s important that we don’t rush toward failure,” Ms. White said. “This is different for us. We have a lot more players in it. This is something different from some of our other acquisition programs because we do have a great deal of commercial interest.”

Ms. Greene said the company probably would not have sought the Maven work if company officials had anticipated the criticism, according to notes on Ms. Greene’s remarks taken by a Google employee and shared with The Times.

Another person who watched the meeting added that Ms. Greene said Maven had been “terrible for Google” and that the decision to pursue the contract was done when Google was more aggressively going after military work.

Google does other, more innocuous business with the Pentagon, including military advertising on Google properties and Google’s ad platform, as well as providing web apps like email.

Meredith Whittaker, a Google A.I. researcher who was openly critical of the Maven work, wrote on Twitter that she was “incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war.”



Even though the internal protest has carried on for months, there was no indication that employee criticism of the deal was dying down.

Earlier this week, one Google engineer — on the company’s internal message boards — proposed the idea of employees protesting Google Cloud’s conference at the Moscone Center in San Francisco in July with a campaign called “Occupy Moscone Center,” fashioned after the Occupy Wall Street protests.

That engineer resigned from the company this week in protest of Maven and planned for Friday to be his last day. But he said he was told on Friday morning to leave immediately, according to an email viewed by The Times.

Peter W. Singer, who studies war and technology at New America, a Washington research group, said many of the tools the Pentagon was seeking were “neither inherently military nor inherently civilian.” He added, “This is not cannons and ballistic missiles.” The same software that speeds through video shot with armed drones can be used to study customers in fast-food restaurants or movements on a factory floor.

Mr. Singer also said he thought Google employees who denounced Maven were somewhat naïve, because Google’s search engine and the video platform of its YouTube division have been used for years by warriors of many countries, as well as Al Qaeda and the Islamic State.

“They may want to act like they’re not in the business of war, but the business of war long ago came to them,” said Mr. Singer, author of a book examining such issues called “LikeWar,” scheduled for publication in the fall.

https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html
 
Are you against making our military strikes more precise so we can get the bad guys with less collateral damage though?

These employees argued that Google is going against its "Don't Be Evil" motto just by working with the U.S. military. Personally, I do not think helping to reduce civilian casualties on the battlefield is evil.

Now, if these employees of a U.S. company believe that their country's armed forces are inherently evil, and that therefore working with the armed forces equates to helping evil, then that would explain a lot about their stance.
Or, because they're the ones working on it, they have insight into other ways their technology may be used, perhaps?
 
Or, because they're the ones working on it, they have ideas about how the tech could be used in less innocuous ways?

Sounds to me like you didn't even bother to read their letter before engaging in a discussion about it.

These 3000+ employees don't want to see Google tech in military applications, period. It doesn't matter if it's used for good or bad, Google's involvement with the U.S Armed Forces, or as these employees call it, "the business of war", is simply something they're against in principle.
 
Sounds to me like you didn't even bother to read their letter before engaging in a discussion about it.

These 3000+ employees don't want to see Google tech in military applications, period. It doesn't matter if it's good or bad, it's just something they're against in principle.
This letter?
"...a letter signed by almost 4,000 Google employees took issue with this assurance, saying, "The technology is being built for the military, and once it's delivered, it could easily be used to assist in [lethal] tasks."

The petition asked that Google immediately cancel the project, saying, "We believe that Google should not be in the business of war.""

Or this letter,
"Opposition to the project isn't just coming from inside Google. An open letter from the International Committee for Robotics Arms Control expressed solidarity with the protesting Google employees and was signed by over 200 researchers and academics in artificial intelligence. The letter says Google should "commit to not weaponizing its technology" and terminate its contract with the DoD.""?

Sounds to me like you didn't read your own sources.
 
...and now we're back to square one.

Tell us, what's "less innocuous" about more accurate drone "lethal tasks" against their intended military targets while minimizing civilian collateral damage?

If precision military drone strikes that leave innocent bystanders unharmed are the epitome of the "evil" they fear will rise from this collaboration, I'm all for it.
 
Drones are the biblical locusts. Good times and good luck.
 
Google has been a government asset for quite some time, and some say the CIA helped its early development. The military will get the technology either way, and Google is a for-profit business that will take any government contract; look at Amazon's absurd growth this year. It's good that employees are expressing their opinion, but I would say their moral dilemma should have started with the decision to build AI in the first place.
 