As technology evolves, artificial intelligence is becoming increasingly mainstream, and it will inevitably change the way we interact. On the one hand, AI has the potential to solve a variety of problems and streamline our lives and our work. On the other, will this come at the cost of the all-important human touch? One oft-cited prediction claims that by 2020, 85% of client interactions will be managed without a human. But can there really be an algorithm for building relationships?
Are we at risk of losing the human touch?
Let's use our industry as an example. Recruitment is a people business. It always has been. It's all about building and maintaining relationships with clients and candidates. Caring for clients and understanding not just what they do, but what motivates them and what pressures they're experiencing, are vital to managing these relationships.
For example, when interviewing a candidate who has recently been made redundant, jumping straight in with interview prep questions for the next role isn't the right approach. Taking an empathetic approach and understanding what the candidate has been through is a fundamental part of building a lasting relationship, and of rebuilding their confidence.
We are seeing the rise of AI and chatbots in their current form, but are they capable of reading body language, interpreting tone of voice and judging a person's mood? There are some things that are fundamentally intuitive for people and that AI cannot replicate.
Just because we have it, should we use it?
It's vital that AI behaves in ways that are aligned with human interests. The effects of such a system will depend upon whether it is pursuing human-approved goals. If not, an advanced AI system may find unintended, unintuitive ways to achieve the specific goals it is tasked with.
A perfect example of this happened just recently. Two of Facebook's AI bots started talking to each other and developed a language all of their own. The two bots were supposed to be learning to trade balls, hats and books, assigning value to the objects then bartering them between each other. But instead of trading in English, they taught each other a language the human designers never intended.
If a Facebook chatbot gets out of hand, technicians can simply shut it down and fix it. But more advanced AI could prove more difficult, if not impossible, to fix. A system could acquire new hardware, alter its software, or take other actions that would leave the original programmers with no further control over it. And since most programmed goals are better achieved if the system stays operational and continues pursuing them than if it is deactivated or its goals are changed, such systems will naturally resist shutdown.
While this may sound like sci-fi, it is a logical path for the future of AI, and that's exactly the problem. Logic can only take you so far within the context of a human world, because we are so much more than just complex logic engines. AI is going to become a bigger part of our lives whether we like it or not. But as we engage more with artificial intelligence, we must not lose sight of why it's important to remain human.