The Limits of AI

Very interesting article!

A US Air Force F-16 pilot just battled AI in 5 simulated dogfights, and the machine emerged victorious every time

I know Air Battle Managers (USAF AFSC 13B3X) are just dying to be the tip of the spear…

The aviator part of me says we’ll always need a pilot up there, somewhere, if for no other reason than knowing when to employ and when NOT to employ AI-enabled UCAVs. Sometimes, there’s just no substitute for a human brain connected to a Mark I Eyeball. Not only will the enemy be able to develop ways of confusing AI, but bad actors will be able to escalate situations when no escalation is desirable.

The IT part of me says a 20-G UCAV will provide some incredible capabilities with respect to maintaining air superiority and control of an airspace. If you need to deny an area to ALL traffic, send ’em up. Looks like they’re ready. Just include three ultrasecure, independent means of telling them to stand down. You don’t want them to shoot down an airliner deviating 50 miles to avoid a line of thunderstorms.

Why three? Because even fail-safes can fail.

Why ultrasecure? Because you never want to give the enemy even a remote opportunity either to gain access to a capability or to deny you access to it, particularly a capability that can limit your own operations.

Why independent? Because you never want a mistake by one individual, even one in cahoots with a second individual, or an error in a single IT system or power supply, to alter operational capability or, worse, create a threat to your own forces or civilians.
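To make the "three ultrasecure, independent means" idea concrete, here's a minimal sketch in Python. It assumes a fail-safe policy where any ONE authenticated channel can order a stand-down, so losing two links still leaves a recall path, while forging an order requires stealing a key held only by that channel. The channel names, keys, and message format are all hypothetical illustrations, not anything from the article.

```python
"""Sketch of triple-redundant, independently keyed stand-down channels.
Hypothetical: channel names, keys, and the any-one-channel policy."""

import hmac
import hashlib
from dataclasses import dataclass

STAND_DOWN = b"STAND_DOWN"

@dataclass(frozen=True)
class Channel:
    """One independent link: its own radio path, power, and secret key."""
    name: str
    key: bytes  # provisioned separately per channel; no shared secrets

    def verify(self, message: bytes, tag: bytes) -> bool:
        """Accept a command only if its HMAC tag matches this channel's key."""
        expected = hmac.new(self.key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

# Three independent channels: separate keys, separate infrastructure.
CHANNELS = [
    Channel("satcom", b"key-provisioned-for-satcom"),   # hypothetical keys
    Channel("line_of_sight", b"key-provisioned-for-los"),
    Channel("hf_backup", b"key-provisioned-for-hf"),
]

def should_stand_down(received: dict[str, tuple[bytes, bytes]]) -> bool:
    """Fail-safe policy: ANY one authenticated channel can order a stand-down.

    `received` maps channel name -> (message, tag) as heard on that link.
    A jammed or failed channel simply isn't present; the others still work.
    """
    for ch in CHANNELS:
        if ch.name in received:
            message, tag = received[ch.name]
            if message == STAND_DOWN and ch.verify(message, tag):
                return True
    return False

# Example: satcom is jammed, but line-of-sight delivers a valid order.
msg = STAND_DOWN
tag = hmac.new(CHANNELS[1].key, msg, hashlib.sha256).digest()
print(should_stand_down({"line_of_sight": (msg, tag)}))  # True
```

The asymmetry is deliberate: standing down should be easy for us and hard for the enemy to block, which is the opposite of how you'd gate a weapons-release command.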

As interesting as the outcome of this exercise may be, real wars (even routine operations under semi-peaceful conditions, such as a truce or the stalemate that exists with North Korea) can be infinitely more complex.

A well-trained and experienced human can reprogram themselves rather rapidly, and with predictable outcomes, when encountering unforeseen circumstances.

To date, AI cannot.

I’ll give you a real-world example.

While flying in Iraq, one day we came beak to beak with another C-130. It was flying at our altitude, along the same course, but in the opposite direction. It didn’t matter whose fault it was. The point is, it happened when it wasn’t supposed to happen. Thankfully, it was daylight and we saw one another with enough time to avoid a collision. Mistakes happen. Human brains deal with unanticipated situations quite well, but AI generally doesn’t.

The same goes for poorly conceived orders: “On 8 January 2020, the Boeing 737-800 operating the route was shot down shortly after takeoff from Tehran Imam Khomeini International Airport by the Iranian Islamic Revolutionary Guard Corps. All 176 passengers and crew died.” – Source

When humans make mistakes, either those same humans or other humans can work to mitigate potential damage. Yes, we’re prone to error, but we’re also incredibly adaptable to unforeseen circumstances. AI is subject to errors in programming, testing, and employment, and isn’t yet adaptable enough to think outside the box.

Would AI overcome its own programming to fly right up the backside of another fighter that had been hit by enemy fire and was losing fuel, and push it out of enemy territory by its tailhook so that its crew could eject over less hostile territory?

True story: Pardo’s Push.
