This discussion considers the use of AI algorithms to assist in decisions such as sentencing, parole, and law enforcement. Proponents argue that it can improve efficiency and reduce human bias. Opponents argue that it may perpetuate existing biases and lacks accountability.
@B46V7MR (Libertarian) · 1wk
On multiple occasions, when exposed to large enough data sets, large language AI models have developed data-driven prejudices against groups who have higher propensities for criminality, low time preference behaviour, subversion, communism, violence, etc.
Invariably, the superior processing ability of AI creates the patterns that many progressive and similarly morally relativistic people would call "racist" or "prejudiced," which would present a political obstacle to this becoming a reality.
It's entirely possible that our courts could produce better outcomes simply…
@9TLDMJL · 7mos
No, AI does not understand emotions and consequences. It would not be able to properly assess the weight of these decisions on the community.
@9MKYHZ7 · 11mos
Yes, but only for minor infractions that would otherwise tie up the court's time needed for more major cases.
@B46GZPV · 1wk
No, it should not make decisions. However, AI should be used on a case-by-case basis to streamline research comparing cases.
@B3286XS · 2mos
If a human makes a mistake, the human has to take responsibility. If an AI makes a mistake, who will take the responsibility?
@B2VJ8VZ (Independent) · 2mos
It should not be used to make decisions about people, but it should be used to help facilitate those decisions.
@B2SWR5Q (Conservative) · 2mos
No, because AI works by prediction models, and justice shouldn't be applied to future actions, only past actions.
@B29KW77 · 3mos
AI should be looked at as a possible voice alongside the judge and jury; not to make decisions, but as one voice in the choir used to help make informed decisions.
@9ZNZZY8 · 4mos
Yes, but it should not be the main source of judicial decisions. Rather, it should be used to ensure consistent and uniform results based on case law and common-law precedent.
@9ZNXMHS · 4mos
Yes, assuming massive human oversight and the AI having to explain each part of its thought process and how it reached a conclusion.
@9YWJYJ6 · 5mos
Yes, but the accuracy of AI should be improved beforehand.
@9YMQ7FQ · 5mos
Both AI and a human should make two separate decisions and then compare them.
@9VLZQZ2 · 6mos
Should be used to help decide but not ultimately decide.
@9VDFZ6F (Conservative) · 6mos
No, a human with feelings and emotions should be able to make decisions that drastically affect lives.
No, AI is based on human intellect and is therefore programmed to be biased based on its human programmer.
@9V3BFHY · 6mos
Abolish AI on a global scale because it takes over other people's jobs.
@9TVPNXW · 6mos
As long as the defendant and prosecution are okay with it.
@9SLJBJV · 7mos
Yes… it would take the human error factor and emotion out of sentencing and provide more consistency. Criminal justice currently favours criminals' rights, and sentencing needs to consider and benefit the victims. It needs to be more than just a "legal system".
@9RRJ6D7 · 8mos
Yes, but only if the technology is proven to have extremely high accuracy (99%+)
@9RPPNG7 · 8mos
Yes, but it ought to be used to supplement the findings of the courts and juries.
@9RCXLPV · 8mos
It should be used to detect evidence that was tampered with by AI.
@9RC3SBW · 8mos
Not now, since AI has been proven to invent case law; perhaps in the future, but only as an adjunct.
@9RBY87R · 8mos
No. AI could make suggestions, but decisions should be made by humans.
@9RBNFB7 · 8mos
No, it will be assumed objective even though it will maintain our biases.
@Prosperitarian · 9mos
Yes, but only to offer solutions to decisions, not make them
@9P8NRFM (New Democratic) · 10mos
Artificial intelligence in its current state is not true AI, as it cannot make its own decisions or reason in an independent, unbiased manner. Current AI models rely heavily on information produced by human scholars who are experts in their respective fields. Therefore, AI is not an appropriate overseer within legal/criminal fields.
@9NHHZDS · 10mos
AI can be used as a tool to assist in making decisions; however, it should not be the sole decision maker.