72 Replies

 @ISIDEWITH Discuss this answer...2yrs

No

 @B53T9KF Liberal from Alberta agreed…1yr

AI can’t be prosecuted, as it isn’t sentient and is effectively just a computer. Sending the inventor to jail is wrong, as they could have made a general-use AI that was used for military applications and accidentally killed an innocent person.

 @9MG74RS from Ontario agreed…2yrs

If you just happened to fit a vague description of a target, how would you feel? Would you trust a drone not to take you out just in case?

 @ISIDEWITH Discuss this answer...2yrs

Yes

 @9MG74RS from Ontario disagreed…2yrs

AI doesn't care about people. It doesn't truly understand. This is a slippery slope. Moreover, it's still only as good as its human-made training model.

 @BBNCJTQ from Ontario answered…2mos

No, this would create an unfair advantage against other countries (And what if it becomes self aware like in the movies?).

 @BBDSHS4 from Nova Scotia answered…2mos

Artificial intelligence assistance is a foreseeable outcome, but full guidance and control of weaponry by AI should not be implemented.

 @B8X7GZY Liberal from Ontario answered…5mos

Yes, but not until they are made reliable and safe enough that nothing will go catastrophically wrong with the system.

 @B8Q2NFW from Alberta answered…5mos

Guidance technology is fine, but the final strike should be conducted by a human. The moral compass of AI is unclear.

 @B72J8JF from Alberta answered…8mos

No, there is a chance for the computer to select the wrong target, which could lead to loss of civilian life or friendly fire.

 @B4GBZQG from Alberta answered…1yr

No, the use of artificial intelligence in war removes human moral responsibility for ending life and will lead humanity down a path of immoral destruction.

 @B4FBP59 Liberal from Alberta answered…1yr

Yes, as long as the decision making on the launch and timing of any attack or defence is coordinated and approved by a human, and AI is used for intelligent guidance only.

 @B45XZ6W from Florida answered…1yr

I don’t agree with any murder of anyone, but I suppose it’s not a horrible thing to use AI for fighting instead of humans.

 @B3QLCP3 from Ontario answered…1yr

AI technology is still relatively new and should be developed more before applying it to weaponry and defense applications.

 @B324X4F from Ontario answered…1yr

I feel like the development of this technology comes with time and experimenting, so we should give it a few years.

 @9ZDCX9T from Washington answered…1yr

No, there is not enough testing or information to say that an AI can distinguish civilians, military personnel and threats.

 @9XCDJK2 from Alberta answered…2yrs

It depends how well the tech can be trusted; it would have to go through years and years of tests before official use, since it may have glitches.

 @9WKLBWB from British Columbia answered…2yrs

Yes, but only after trials, research, and contingencies to ensure it is not a threat to ourselves.

 @9WK3LGV Liberal from British Columbia answered…2yrs

Who controls the AI? Was it created by our government, a private company, or, worse, another country?

 @9WFR74Q from Alberta answered…2yrs

Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.

 @9VY8CNN from British Columbia answered…2yrs

No, this could too easily lead to a doomsday scenario. Keep AI out of the military!

 @9VRM7F4 from Alberta answered…2yrs

Yes, if there is a person to monitor the weapon in case of AI malfunction.

 @9VJLT3Z from Alberta answered…2yrs

It's going to happen inevitably anyway. We need an AI ethics commission... because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.

 @9VJ6C4K from British Columbia answered…2yrs

Yes, but only if it is proven not to glitch or be at risk of being hacked.

 @9VF4NS9 from Ontario answered…2yrs

Yes, but only if it is more accurate than it would be under human control.

 @9TRP8FJ from British Columbia answered…2yrs

Yes, but for defensive systems only. There should always be a human in the loop pulling the trigger for offensive systems.

 @9RBYBX6 from Nova Scotia answered…2yrs

The belief that artificial intelligence will be a downfall for mankind, since most of the world will have access to it, makes this question difficult to answer. It is important that artificial intelligence be used with caution.

 @9RBVDVT from Nova Scotia answered…2yrs

There should always be humans in the lethal force decision making process.

 @9QZCYDN from Ontario answered…2yrs

Not entirely guided, and also not complete, total AI that can think for itself like a human. Otherwise I think it'd be effective.

 @9QSV5BH from California answered…2yrs

Yes, as long as it is pretty much guaranteed they will not fail... like, ever.

 @9QRJNMW from Ontario answered…2yrs

Yes, but only if there is always a human kept in the decision making loop.

 @9Q7YMJZ from Ontario answered…2yrs

Yes, but only with appropriate oversight and against specific military targets

 @9PRH44K answered…2yrs

Not a yes or no answer. There needs to be more clarification on whether the AI is making all decisions up to firing the weapon or just controlling it to the target.

 @9P8NRFM New Democratic from Alberta answered…2yrs

Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.

 @9LHXK8G Conservative from Ontario answered…2yrs

Not at this time, and not until unbiased third parties review the technology further and there is more scientific consensus.

 @9LGCYKF from Ontario answered…2yrs

We will eventually just be creating insane fighting robots that we would need to use nuclear weapons to destroy, just to be safe. This will definitely mislead wars and be way too unsafe.

 @B6XLYTP from New Brunswick answered…8mos

I think that maybe it could be for the better, but it also could mess things up in the end. I think over time they should gradually introduce it, but to send it right into it, I think not.

 @B6VS58M from Northwest Territories answered…8mos

Yes, but the weapon must be overseen by a person and contain fail-safes in case the AI fails or goes off course.

 @B5WJ3KY New Democratic from Ontario answered…11mos

I'm guessing few people haven't watched the Terminator movies at least once. All jokes aside, only have AI for conventional weapons.

 @B52Z2H3 Green from Ontario answered…1yr

Yes, as long as the AI is highly protected by anti-hacking and anti-virus measures, with lots of encryption, to ensure there will be no hacking to change where the weapons are being guided.

 @B4ND9T9 Conservative from British Columbia answered…1yr

It is going to happen regardless of my opinion. This becomes a battle of politics, humanitarian efforts, and war crimes.

 @B4N2F7D from Alberta answered…1yr

No, artificial intelligence should not be a priority for use in Canada, as we account for a large amount of the world's freshwater, which AI uses to function.

 @B4KPLPH New Democratic from Manitoba answered…1yr

If this number of Palestinian civilian casualties was reported by the Health Ministry of Gaza, then it is unreliable and should not be assumed as fact. The Health Ministry of Gaza is operated by Hamas, a designated terrorist organization, which lies about the number of casualties to heighten emotional responses from the West. Hamas deliberately puts Palestinian civilians in harm's way by hiding in and under mosques, schools, and densely populated areas. In the current Israel-Hamas war, Israel has the lowest combatant-to-civilian ratio of any historical war.

 @B4J7TV9 Liberal from Alberta answered…1yr

No, no weapons system should be without human oversight; an autonomous drone system is acceptable only if targets are designated by a human operator or a specific set target.

 @B2S4PY3 from Alberta answered…1yr

After sufficient training time to determine the full impacts, limitations, and abilities of the technology.

 @B2L42TM from Ontario answered…1yr

Yes; if we send our soldiers to fight wars, we should give them every advantage to reduce casualties and shorten the conflict.

 @B2574KF New Democratic from Manitoba answered…1yr

Only if the AI is advanced enough that it has a mind of its own. Plus, it's computer-based, so I expect someone tinkering with the AI.

 @B224W3X People’s from Ontario answered…1yr

Make this question more specific. I.e., AI guidance systems? AI-simulated targets? What are we talking about here, folks? Let's get a little more specific.

 @9ZN65GC from Ontario answered…1yr

Yes, against my better judgement. We will need to use AI to stay abreast of our adversaries, or be left behind.

 @9TGDVKN Independent from Alberta answered…2yrs

Yes, but only when it is safe to use. If it's a ranged weapon where there aren't people who would be in front of it, then by all means; AI doesn't have emotions, and it's more accurate and safer when operated correctly.

 @9TF5F5Z from Alberta answered…2yrs

I believe missile guidance systems should use AI, but AI should never choose where to target

 @9T6X9HJ from Ontario answered…2yrs

No, and even the current use of AI should be strictly supervised and limited by public decisions, not privatized.

 @9T6GQ6F from New York answered…2yrs

The military must operate under the guidance of the revolutionary working class and only until the class divide is abolished worldwide.

 @9T2Z7Y5 from Alberta answered…2yrs

Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.

 @9SNJQRW from Ontario answered…2yrs

Yes, but not fully AI. There needs to be constant human review and extensive research beforehand.

 @9MC4BQL from Alberta answered…2yrs

Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.

 @9LW6J33 from Ontario answered…2yrs

Any weapon with the potential to kill or injure should not be 100% AI autonomous.

 @B6VS9TD from Alberta answered…8mos

Yes, as long as it is highly protected by anti-hacking and anti-virus measures, to make sure the reverse doesn't happen where the weapons are directed toward us instead of our enemies. As well, the use of AI weapons should be limited to a degree; if we lose control of the AI or it gets too smart, it can backfire on us, because we have seen in recent movies how scary the evolution and development of AI has gotten.
