Drones will soon decide who to kill

Discussion in 'Industry News' started by Calliers, Apr 16, 2018.

  1. Calliers

    Calliers HH's MC Staff Member

    Joined:
    Oct 12, 2004
    Messages:
    37,711
    Likes Received:
    2,396
    Trophy Points:
    139
    The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI). This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

    Once complete, these drones will represent the ultimate militarization of AI and trigger vast legal and ethical implications for wider society. There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

    Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.
    ____________________
    Source: thenextweb
     
  2. Trusteft

    Trusteft HH's Asteroids' Dominator

    Joined:
    Nov 2, 2004
    Messages:
    20,531
    Likes Received:
    1,773
    Trophy Points:
    153
    I see nothing ever going wrong with this. A wise decision.
     
    Calliers likes this.
  3. Judas

    Judas Obvious Closet Brony Pony

    Joined:
    May 13, 2002
    Messages:
    37,982
    Likes Received:
    793
    Trophy Points:
    138
    Yes, I support this... as well as the replacement, with AI, of the individuals who made this decision... it is a revolutionary idea.
     
    Trusteft likes this.
  4. IvanV

    IvanV HH Assassin Guild Member

    Joined:
    Dec 18, 2004
    Messages:
    9,945
    Likes Received:
    1,298
    Trophy Points:
    123
    It's logical. I'm not saying I'm in favour, but it's logical. I'm actually more scared of this in a civilian setting (in a military one, plenty goes wrong with the human with its finger on the trigger anyway). Imagine automated riot cops in a dictatorship. No emotion, no compassion, just superior armour and firepower (it doesn't necessarily have to be lethal force).
     
  5. Judas

    Judas Obvious Closet Brony Pony

    Joined:
    May 13, 2002
    Messages:
    37,982
    Likes Received:
    793
    Trophy Points:
    138
    I'm not denying it'll be logical... however, the variables governing how targets are automatically chosen are created by humans at the moment, and there are bound to be biases or "growing" pains, so to speak... the question is, how many people are likely to be incorrectly targeted due to human error, bias, or tampering for that matter.

    In time I'm more likely to trust a proper AI to make ideal choices, or at least present the ideal options, as it'll be able to calculate and present an answer with little if any bias or agenda... but we're far from that yet.
     