AI in defense refers to the use of artificial intelligence technologies to enhance military capabilities, such as autonomous drones, cyber defense, and strategic decision-making. Proponents argue that AI can significantly enhance military effectiveness, provide strategic advantages, and improve national security. Opponents argue that AI poses ethical risks, threatens the loss of human control, and can lead to unintended consequences in critical situations.
Response rates from 3.9k Alberta voters.
48% Yes
52% No
Trend of support over time for each answer from 3.9k Alberta voters.
Trend of how important this issue is for 3.9k Alberta voters.
Unique answers from Alberta voters whose views went beyond the provided options.
@B2VNWSK · 1wk
Only under supervision from a committee of experts, with the ethics, implications, biases, and safety of the people in mind.
@B2V74TC · 1wk
Taking away the face-to-face component of fights leads to diminished value for human life. Using AI to bring supplies and go on suicide missions, or to do surveillance, would not be a bad idea, however.
@B2TS86R · 2wks
Yes and no. The military shouldn't rely on AI and should be able to find solutions on its own, but using AI to help when they are stuck on a problem would be fine.
@B2SWQFZ · 2wks
If there are ways to make the AI extremely safe to use, and have a low to zero chance of being hacked by foreign parties.
@B2ST4SY · 2wks
I do not believe that the government should invest in artificial intelligence, as it is not the be-all and end-all. With AI you do not have a human being behind it; it is only a cold, hard computer, so in a life-or-death situation it could very well take the most logical route. For example, suppose a terrorist is hiding in a group of 100 people. The choice is to wait until that person can be safely removed from the group of innocents, or to end the 100 lives to stop the terrorist and save millions later on, with those deaths counted as casualties. Where a human may try to find another route and minimize the casualties, the AI may choose to end all the lives for the sake of millions of others living in peace.
@B2SL784 · 2wks
They should use it only in specific instances, but always have a human around to make sure no mistakes are made.
@B2S3FP5 · 2wks
Yes, but only for predicting what may be coming; AI should not have control over any weapons.
@B2RD6TT · 2wks
Yes, but its access should be restricted to making suggestions. AI should not have the ability to launch nuclear missiles in the case of a misidentification.