
Answer Overview

Response rates from 3.9k Alberta voters.

Yes: 48%
No: 52%

Historical Support

Trend of support over time for each answer from 3.9k Alberta voters.


Historical Importance

Trend of how important this issue is for 3.9k Alberta voters.


Other Popular Answers

Unique answers from Alberta voters whose views went beyond the provided options.

@B2VNWSK from Ontario answered 1wk ago

Under supervision from a committee of experts, with the ethics, implications, biases, and safety of the people in mind.

@B2V74TC from Alberta answered 1wk ago

Taking away the face-to-face component of fights leads to diminished value for human life. Using AI to bring supplies and go on suicide missions, or to do surveillance, would not be a bad idea, however.

@B2TS86R from Alberta answered 2wks ago

Yes and no. The military shouldn't rely on AI and should be able to find solutions on its own, but using AI to help when they are stuck on a problem would be fine.

@B2SWQFZ from Alberta answered 2wks ago

If there are ways to make the AI extremely safe to use, with a low to zero chance of being hacked by foreign parties.

@B2ST4SY from Ontario answered 2wks ago

I do not believe that the government should invest in artificial intelligence, as it is not the be-all and end-all. With AI you do not have a human being behind it; it is only a cold, hard computer, so in a life-or-death situation it could very well take the most logical route. For example, if a terrorist is hiding in a group of 100 people, the choice is either to wait until that person can be safely removed from the group of innocents, or to end the 100 lives to stop the terrorist and save millions later on, writing those deaths off as casualties. A human might try to find another way to deal with the situation and minimise the casualties, whereas the AI might choose to end all those lives for the sake of millions of others living in peace.

@B2SL784 from Alberta answered 2wks ago

They should do it only for specific instances, but always have a human around to make sure no mistakes are made.

@B2S3FP5 from Alberta answered 2wks ago

Yes, but only for predicting whether things may be coming; AI should not have control over any weapons.

@B2RD6TT from Alberta answered 2wks ago

Yes, but its access should be restricted to making suggestions. AI should not have the ability to launch nuclear missiles in the case of a misidentification.