
97 Replies

 @ISIDEWITH Discuss this answer... 9mos

No

 @9VFDL8Q from Quebec agreed… 4mos

AI is too heavily relied on these days. I feel that if AI were to turn sentient and turn on us, we would have no counter to it, leaving us helpless against the enemy and at risk of being wiped out. And if it were to gain control of our defence weapons, such as missiles, transport, and the nuclear arsenal, it could turn the whole world against us, or we could be blown off the face of the earth.

 @ISIDEWITH Discuss this answer... 9mos

Yes

 @B2HWGJZ from Saskatchewan disagreed… 2wks

AI can be extremely dangerous, and it is an inefficient way to use resources. The amount of resources required to run AI is terrible and will have major effects.

 @9VFDL8Q from Quebec disagreed… 4mos

Watch the Terminator series, Avengers: Age of Ultron, I Have No Mouth, and I Must Scream, I, Robot, and literally every other story where the AI turns sentient and turns on the humans.

 @9MW6R3Z from Ontario disagreed… 9mos

The development of general AI is an existential threat to humanity. If we do not regulate AI early, the risk of this threat becoming reality increases. If general AI is not regulated so that its parameters prevent violence, there is no telling what it will do or what it is capable of.

 @ISIDEWITH asked… 5mos

Do you think letting machines make life-and-death decisions in military conflicts is a necessary step forward or does it cross an ethical line?

 @9VNPZW2 from Ontario answered… 4mos

I believe it definitely crosses an ethical line.

To me, it makes sense that humans should have to make the decision when it comes to the killing and destruction of other humans,

because of things such as empathy, morality, and emotion:

things an artificial intelligence does not have.

Granted, things like that could be PROGRAMMED into the AI, but the AI will never be able to come up with them itself, and eventually, due to a flaw in the programming or a loophole the designers did not see, a catastrophe could happen fairly easily.

Yes, I think it crosses an ethical line.
AI has no personal way to tell when to stop, only that it needs to get from point A to point B.

And who knows what it might do to get there?

 @B2S3FP5 from Alberta answered… 2 days

Yes, but only for predicting whether threats may be coming. AI should not have control over any weapons.

 @B2RD6TT from Alberta answered… 3 days

Yes, but its access should be restricted to making suggestions. AI should not have the ability to launch nuclear missiles in the case of a misidentification.

 @B2PT6BZ from Ontario answered… 6 days

No, AI cannot be trusted with the biases inherent in its programming. This would be giving power to a discriminatory program instead of relying on human controls.

 @B2DQ5HF from Quebec answered… 3wks

No. My life has been altered because I've been lied about by losers. They call it AI, but you should really study how these people live and act.

 @B2CMT6Q from Montana answered… 4wks

No, artificial intelligence shouldn't be used to make important and complex military/security decisions.

 @B2BPTZW from Ontario answered… 4wks

It can be useful. However, technology used for strictly confidential purposes can be tampered with.

 @B2BN37H from Newfoundland answered… 4wks

Not at the moment because, while AI is advanced and is advancing as time goes by, I feel like it’s not advanced enough yet.

 @B2B2C59 from Alberta answered… 4wks

It depends. It would really be helpful for AI to take care of defence applications so people can work on other problems, but at the same time it might be better to apply it to small defence applications at first for testing.

 @B286ZK4 New Democratic from Alberta answered… 1mo

Yes, but only if humans have the ability to override these systems in cases of technological corruption (damaged files, hacking, etc.)

 @9W6MXFY from British Columbia answered… 4mos

No. At the end of the day, AI can produce false positives and needs to be constantly maintained by humans, so we might as well just use humans.

 @9W5TSDN from Ontario answered… 4mos

Some support investing in AI for defense to enhance national security and efficiency, while others express concerns about ethical implications, potential misuse, and the risks of autonomous weaponry.

 @9W45G3Y from British Columbia answered… 4mos

It depends, but overall we should be using this as people, not for defensive purposes; AI should only be used for learning.

 @9W2W4QN answered… 4mos

On one hand, Canada will have to stay relevant with other countries; however, this could lead to an abuse of power. I am undecided at this time, as I do not understand enough about AI.

 @9VWVZQB from British Columbia answered… 4mos

Yes, but it should not be replacing any jobs. It should be a research tool as opposed to a replacement for human input. It's not powerful enough yet, in my eyes.

 @9VSPPZL from Saskatchewan answered… 4mos

I believe that if the government were to invest in AI, it should add restrictions, be extremely cautious, and at some point take hypothetical scenarios into account. But overall, AI should be used for research and for methods of possibly improving things such as agriculture and environmental safety.

 @9VMWZRK from Manitoba answered… 4mos

Artificial intelligence should be abolished, as it is dangerous and can be easily manipulated; human intelligence is superior.

 @9VMJ457 from Quebec answered… 4mos

Only if not doing so poses a threat to national security, and not for any other purpose besides national security.

 @9VCJBMD from British Columbia answered… 4mos

Yes, but only so we don't get far behind in military technology. And as long as it can be overridden easily.

 @9V9JVX3 from Ontario answered… 4mos

I believe it should be utilized and maintained by the proper people in power, and used for the good of humanity. For example, in wars it could be used against certain gases, and more.

 @9V9BQ4C from Alberta answered… 4mos

There are good and bad qualities to artificial intelligence, so it all depends on the applications of the AI and where the spending goes.

 @9TZKMBH from Alberta answered… 4mos

Other countries may use it, but it is certainly a scary idea, and I wouldn't want to be without it if they attack us with it, so yes and no.

 @9TZHZWZ from Alberta answered… 4mos

Artificial intelligence is good to invest in, except when it is used to protect us. I trust real people with morals, rather than a robot, with my safety and security.

 @9TY279W from New Brunswick answered… 5mos

Yes, as long as the systems are tested on a regular basis to reduce mechanical error as much as possible. Also, don't set it up on the most deadly mechanisms, for example, nuclear missiles.

 @9TWK4RL Conservative from Ontario answered… 5mos

In the sense of defense applications where the Canadian military can use it to identify threats from foreign countries, yes.

 @9TV56S7 Liberal from Alberta answered… 5mos

Yes, but they need to understand that AI cannot be used for every single defense. We still need to come up with our own solutions, but I can see how AI can assist in improving defense plans.

 @9TPFV66 from Ontario answered… 5mos

It would advance our technology and we could use it for defense, but I feel like AI is gaining way too much power.

 @9TP8MJS from Ontario answered… 5mos

Yes and no, because if this investment is opened to the public, then AI would be everywhere we go, and a lot of people would use it for unnecessary purposes.

 @B2KVJ7X from Ontario answered… 2wks

Yes, but it should be studied and highly secured to ensure public safety and reduce the risk of losing control.

 @B2JKHRW from Pennsylvania answered… 2wks

AI use requires the presence of subject matter experts. If the government is planning on implementing AI, it needs more SMEs first.

 @B2HKS57 New Democratic from Quebec answered… 2wks

The potential is there; little by little they could incorporate the use of AI, but definitely nothing dramatic.

 @B2H27FG from Alberta answered… 3wks

Only at the border. Track where crossings are happening outside of patrolled facilities and set up an unknown, covert group that can stop it right then and there.

 @B2F3YBD from British Columbia answered… 3wks

No. AI is bad for the climate and should be used as little as possible. The future of AI should be in the hands of the public, for the public to decide.

 @9ZDCX9T from Washington answered… 3mos

Yes, but only if the AI is used for information and for detecting enemy attacks; it cannot initiate attacks or counter-attacks.

 @9ZD7HCG from New York answered… 3mos

Yes, but not to the extent of the private-market bubble, and only when the technology has been heavily tested and found to be reliable should it be put to use.

 @9YKVWYH New Democratic from British Columbia answered… 3mos

AI is difficult because it's a new thing. It can be used for good and bad, so it's a case-by-case basis.

 @9YDGG9X from Alberta answered… 3mos

I don't believe artificial intelligence is a safe creation; it will eventually take over all jobs, and society as a whole.

 @9WQ5HHK from Ontario answered… 3mos

It depends. If AI is being used to check whether someone is who they say they are, then no; however, if it is being used to keep servers protected, then yes.

 @9W9M54X from British Columbia answered… 4mos

Yes, but until AI becomes more advanced it should not be the biggest priority, nor should we rely on it too heavily if it does reach that level of advancement.

 @9VMC949 Liberal from Ontario answered… 4mos

AI is not that competent in that field as of now, so it might not be the best option right now, but they could start somewhere.

 @9VKMP3G from Ontario answered… 4mos

At the current stage of AI development, that would lead to more harm than good; however, in the future, once the technology is more developed, it could be beneficial.

 @9VJLT3Z from Alberta answered… 4mos

It is inevitable that they will do this, but they should set up an AI ethics commission for the general use of AI and be very careful in how they use it. It is a Pandora's box.

 @9VDRGX4 from British Columbia answered… 4mos

No, the government should work on developing the human brain instead of creating AI for defense applications.

 @9TMG6DR from Ontario answered… 5mos

I don't have anything against AI, but it could lead to issues like identity fraud and other major problems.

 @9TKHGCV from British Columbia answered… 5mos

Assuming we have all watched Terminator: we need to implement immense security and strict control measures if we do invest in AI defense. So yes, we should invest in AI defense, but keep a tight fist clenched over it.

 @9TJX597 from British Columbia answered… 5mos

Yes, provided it is under constant review by a bipartisan, third-party government agency overseeing it.

 @9T9Y95Z from Ontario answered… 5mos

Yes, but it should be limited, and the AI should have a kill switch along with being overseen by a human.

 @9NGY3VK from Alberta answered… 8mos

I'm extremely iffy on it, since I've seen and read too many horror stories about AI defenses. Try reading "I Have No Mouth, and I Must Scream" and then answer yes with confidence, I dare you.

 @9NC8GVS from Alberta answered… 8mos

Yes, but only if we can be sure that it isn't going to be used against the people.

 @9MV4GBF from Ontario answered… 9mos

I think they should invest in it for certain uses like enemy-detection software, but if it can shoot on its own without a human there, I think it should be banned.

 @9MNPFD4 Liberal from Ontario answered… 9mos

It depends on how serious the situation is, and whether the AI has been carefully tested and works properly.

 @9MNG73Z from Ontario answered… 9mos

It should be used as an application with limited use, not in excess; decisions should still depend on human intelligence and human control.

 @9MKBK8S Conservative from Ontario answered… 9mos

AI can be used for things such as intelligence gathering or tactical purposes. However, any weapon systems should be human controlled.

 @9T43Q5N from Ontario answered… 5mos

Yes and no. Using AI to enhance military capabilities is fine to some extent, but giving it full control could create a risk of dangerous situations.

 @9T3TX4D New Democratic from Ontario answered… 5mos

It really depends on how AI evolves. I get that it could (in some cases) give a second opinion if the military were making plans or decisions; however, it also takes time and knowledge to figure out the logistics of the AI-generated opinion. If it is able to give proper answers and opinions with thorough explanations behind them, then maybe the government could invest in AI.

 @9SZHZL3 from Alberta answered… 5mos

Yes, but only to the extent that it is thoroughly monitored and used minimally, with human evaluation of AI decisions.

 @9SVW64D from Ontario answered… 5mos

Yes, but again, make sure it's well controlled and understood, that we are not entirely dependent on AI, and that there is a way to overrule an AI command.

 @9SMKCNK Independent from Ontario answered… 5mos

They should invest in AI technologies, but with great vigilance and with concrete details and barriers that make sure nothing gets out of hand.

 @9S4TP94 Green from Nova Scotia answered… 6mos

Other militaries will still invest in this tech, no matter how abhorrent it is to think about an AI-manned gun.

 @9RCHXWB from Nova Scotia answered… 7mos

The government should invest in regulating AI and in diplomacy initiatives around the ethical implications of AI before any tech gets applied practically.

 @9RCBSYB from Ontario answered… 7mos

Semi-autonomous drones would be acceptable, i.e., no serious action without human approval. Cyber defense that provides warnings, alerts, blocking, and locking out while waiting for human action would be acceptable. Decision-making should be left to humans, but AI can provide all relevant information so as to allow for a well-informed decision. Humans should retain control in all cases.
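As a rough illustration of the human-in-the-loop pattern this commenter describes (the AI may only recommend; nothing executes without a person signing off), here is a minimal, hypothetical Python sketch. The names used (`Recommendation`, `ai_recommend`, `request_human_approval`) are invented for illustration and do not refer to any real defence system or library.

```python
# Hypothetical sketch of a human-in-the-loop gate: the AI component may only
# recommend; no action is executed without explicit human approval.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str        # e.g. "raise_alert", "block_ip"
    rationale: str     # why the model suggests it
    confidence: float  # model confidence in [0, 1]


def ai_recommend(event: dict) -> Recommendation:
    # Placeholder for a model call; a fixed rule stands in for illustration.
    return Recommendation(
        action="raise_alert",
        rationale=f"suspicious event: {event.get('type')}",
        confidence=0.72,
    )


def request_human_approval(rec: Recommendation) -> bool:
    # In a real system this would route to an operator console;
    # here we simply prompt on stdin.
    answer = input(
        f"Approve '{rec.action}'? ({rec.rationale}, "
        f"confidence {rec.confidence:.2f}) [y/N] "
    )
    return answer.strip().lower() == "y"


def handle_event(event: dict) -> None:
    rec = ai_recommend(event)
    if request_human_approval(rec):
        print(f"Executing approved action: {rec.action}")
    else:
        print("No action taken; operator declined.")


if __name__ == "__main__":
    handle_event({"type": "port_scan", "source": "203.0.113.7"})
```

The only point of the sketch is that the approval call sits between the model's recommendation and any execution path, which matches the commenter's "no serious action without human approval" condition.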

 @9RBXRKD from Nova Scotia answered… 7mos

Yes, because other countries will, and do. However, there should always be human oversight of major decisions.

 @9RBVWZ9 from New Brunswick answered… 7mos

Yes, but decision-making should be left to humans, and all source material should be triple-checked for veracity.

 @9RBT38T from Nova Scotia answered… 7mos

Yes, only because other governments will not hesitate to do so; we need to keep pace with the world.

 @9QSV5BH from California answered… 7mos

It depends on the AI and how reliable it is. Either way, I think we should have real people behind the AI, just in case something fishy gets through the AI's defense.

 @9QSSKF4 from Ontario answered… 7mos

Yes, but make sure that the AI is used as a tool and not as an excuse to hide behind your own deteriorating morals

 @9QS7YM6 from Alberta answered… 7mos

I think they have to be very careful when analyzing risk versus reward. I'm unsure if it's a good idea at this point.

 @9P8NRFM New Democratic from Alberta answered… 8mos

Artificial intelligence in its current state is not true AI, as it cannot make its own decisions or reason in an independent or unbiased manner. Current AI models rely heavily on information produced by various human scholars who are experts in their qualified fields. Therefore, AI is not an appropriate way to validate or officiate defence applications.

 @9NZGRJ8 People's from Ontario answered… 8mos

It could be good. But it can be used against the government by a kid in his mom's basement.

 @9NQ42RX from Saskatchewan answered… 8mos

No, the use of AI is not something we should popularize, especially in the case of government use and national safety.

 @B2SWQFZ from Alberta answered… 1 day

Only if there are ways to make the AI extremely safe to use, with a low to zero chance of being hacked by foreign parties.

 @B2ST4SY from Ontario answered… 1 day

I do not believe that the government should invest in artificial intelligence, as it is not the be-all and end-all. With AI you do not have a human being behind it; it is only a cold, hard computer, so when it comes to a life-or-death situation it could very well go for the most logical route. For example, if a terrorist is hiding in a group of 100 people, the choice is either to wait until that person can be safely removed from the group of innocents, or to end the 100 lives to stop the terrorist and save millions later on, writing them off as casualties. Where a human may try to find another route and minimize the casualties, the AI may choose to end all those lives for the sake of millions of others living in peace.

 @B2SL784 from Alberta answered… 1 day

They should do it only in specific instances, but always have a human around to make sure no mistakes are made.

 @ISIDEWITH asked… 5mos

What are your thoughts on who should be held responsible if an AI system makes a mistake that results in the loss of lives during a conflict?

 @ISIDEWITH asked… 5mos

How might AI in defense change the way governments and soldiers view the concept of 'sacrifice' in war, and is that a good thing?

 @ISIDEWITH asked… 5mos

If AI systems are used for cyber defense, do you believe they can truly keep up with human hackers or outmaneuver them?

 @ISIDEWITH asked… 5mos

Do you think AI could help prevent wars from happening or will it just escalate arms races between countries?

 @ISIDEWITH asked… 5mos

What worries you more: nations not adopting AI fast enough in defense or developing AI too quickly without enough oversight?

 @ISIDEWITH asked… 5mos

Could you trust a machine to defend your country, or does that responsibility need to stay with humans no matter what?

 @ISIDEWITH asked… 5mos

How do you think the use of AI in national defense aligns with our values around human rights and justice?

 @ISIDEWITH asked… 5mos

Could AI in military strategies one day reduce human casualties or will it simply lead to more advanced forms of warfare?

 @ISIDEWITH asked… 5mos

How do you personally feel about the idea of autonomous drones deciding whether to engage in combat without human input?

 @B2TS86R from Alberta answered… 4hrs

Yes and no. The military shouldn't rely on AI and should be able to find solutions on its own, but if they use AI to help them when they are stuck on a problem, then yes.
