The letter urges the United Nations (UN), which has convened an expert panel on the issue, to stop these technologies from being repurposed into killing machines that can identify and attack a target without human intervention.
It asks the UN and its Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems "to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse and to avoid the destabilising effects of these technologies."
The GGE will consider whether such weapons should be added to the list of those banned or restricted under the Convention on Certain Conventional Weapons.
"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter reads. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
The letter was launched today at the International Joint Conference on Artificial Intelligence in Melbourne, Australia – the same day that the GGE was due to hold its first meeting.
Also on the list of signatories is Mustafa Suleyman, the co-founder and head of applied AI at DeepMind, now owned by Google parent company Alphabet. The other 114 signatories represent robotics companies from 26 countries.
"As companies building the technologies in artificial intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm," they said.
These kinds of devices would exceed the level of autonomy currently seen in weapons such as unmanned combat aerial vehicles, which are usually remotely controlled by a pilot. Some fly autonomously but still require a human to fire.
The UN has been considering the issue of what it terms lethal autonomous weapons systems (LAWS) since 2013, when they were raised in a report to the Human Rights Council by Christof Heyns, the former special rapporteur on extrajudicial, summary or arbitrary executions. A number of meetings have since taken place on the subject.
The last of these established the GGE, which is now set to meet in November following the postponement of its first session.
Musk has been vocal on the issue of artificial intelligence recently, and not only on its potential military applications. He has also called on governments to regulate the technology, which he described as an existential threat.