Categories:
product strategy, maintenance, ideation

Main takeaways:

  • So many companies make casual promises about AI.
  • It’s hard to know what’s real and what’s not.
  • With real AI, your network becomes easier to operate.

THE FUTURE OF AI IN OUR LIFE

Jaden Abfalter

This special issue profiles how technological innovations are exerting a transformative force on the practice and academic discipline of marketing. These technologies create tremendous upside potential along with novel risks that need to be understood and effectively managed to realize the benefits and mitigate the downsides. Each of the technological advances discussed in the special issue, including health IT (Agarwal et al.), robotics (Davenport et al.), chatbots (Thomaz et al.), mobile (Tong et al.), social media (Appel et al.), and in-store retail technology (Grewal et al.), is fueled in significant ways by artificial intelligence (AI).

The torrent of development and deployment of AI systems is expanding the scale and scope with which these systems affect our work and everyday lives. These systems are penetrating a broad range of industries, such as education, construction, healthcare, news and entertainment, travel and hospitality, logistics, manufacturing, law enforcement, and finance. Their role in our lives is becoming much more profound as they influence what we buy, whom we hire, who our friends are, what newsfeed we receive, and even how our children and elderly are cared for. They are being employed in a range of marketing applications, such as personalizing product and content recommendations and optimizing cost-per-click and cost-per-acquisition in ad targeting by mining troves of online consumer behavior data. Frontier applications are predicting individuals’ future needs and recommending actions to them. For example, Amazon’s recently launched add-on feature to its personal assistant Alexa, “Alexa Hunches”, learns an individual’s rhythms in interacting with smart-home devices such as a lock or door, detects deviations from those rhythms, and reminds users when to lock a door or turn off a light.
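
To make the deviation-from-routine idea behind “Alexa Hunches” concrete, here is a minimal, purely illustrative sketch (not Amazon’s actual implementation): the timestamps, the threshold, and the should_remind helper are all invented for the example.

```python
from statistics import mean, stdev

# Hypothetical sketch of a "deviation from routine" reminder: learn when an
# action usually happens, then nudge the user if it has not happened by a
# time well past its usual window.

def usual_minute(history_minutes):
    """Average minute-of-day at which the user usually performs an action."""
    return mean(history_minutes), stdev(history_minutes)

def should_remind(history_minutes, now_minute, action_done_today, k=2.0):
    """Remind if the action is still undone well past its usual time."""
    mu, sigma = usual_minute(history_minutes)
    return (not action_done_today) and now_minute > mu + k * sigma

# Example: the front door is usually locked around minute 1350 (22:30).
lock_history = [1340, 1350, 1355, 1345, 1360]   # past lock times, in minutes
print(should_remind(lock_history, now_minute=1420, action_done_today=False))  # True
```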

However, we typically have little understanding of why AI systems make certain decisions or exhibit certain behaviors. Many machine learning (ML) algorithms used to develop these systems are inscrutable, particularly deep learning neural network approaches, which have emerged as a very popular class of ML algorithms. This inscrutability can hamper users’ trust in the system, especially in contexts where the consequences are significant, and lead to rejection of the system. It has also obfuscated the discovery of algorithmic biases arising from flawed data-generation processes that are prejudicial to certain groups. Such biases have led to large-scale discrimination based on race and gender in domains ranging from hiring and promotions to advertising, criminal justice, and healthcare. Biases against vulnerable populations in healthcare are discussed by Agarwal et al. in this issue, while Davenport et al. (also in this issue) provide a broader discussion of algorithmic biases.
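
As a purely illustrative sketch of how such biases can be surfaced, the snippet below computes a simple demographic parity gap, the difference in positive-decision rates across two groups; the arrays and the hiring framing are assumptions for the example, not data from any study cited here.

```python
import numpy as np

# Illustrative only: compare a model's positive-decision rate across
# demographic groups (demographic parity). The data below is made up.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                 # hypothetical hiring decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {g: preds[group == g].mean() for g in np.unique(group)}  # rate per group
gap = abs(rates["a"] - rates["b"])
print(rates, f"demographic parity gap = {gap:.2f}")
```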

The following definition of AI makes salient that human users need to trust AI systems to attain their objectives (Russell n.d., p. 11): “Machines are beneficial to the extent that their actions can be expected to achieve our objectives.” What should be the basis for this trust? In addition to providing users with information on the system’s prediction accuracy and other facets of performance, providing an effective explanation for the AI system’s behavior can enhance users’ trust in the system. In situations where the system recommends a product to purchase or a connection to add to a professional or social network, an explanation for the recommendation is likely to make the information more useful to the user and have a stronger influence on the user’s actions. Such explanations can also be leveraged by developers to improve the model through feature engineering, modification of the model’s architecture, and tuning of hyperparameters, and by trainers to revise the set of training and testing data resources.
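
As one hedged sketch of pairing a recommendation with an explanation (not any particular production system), the item-based example below reports, for each recommended item, the previously purchased item most responsible for the score; the interaction matrix and item names are invented for illustration.

```python
import numpy as np

# rows = users, cols = items; 1 = purchased (invented data)
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
])
items = ["laptop", "mouse", "monitor", "keyboard", "headset"]

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)

def recommend_with_reason(user, top_k=1):
    """Recommend items and name the owned item that most drives each score."""
    owned = np.flatnonzero(R[user])
    scores = sim[:, owned].sum(axis=1)
    scores[owned] = -np.inf                       # do not re-recommend owned items
    for item in np.argsort(scores)[::-1][:top_k]:
        reason = owned[np.argmax(sim[item, owned])]
        print(f"recommend {items[item]} because you bought {items[reason]}")

recommend_with_reason(user=1)   # e.g. "recommend mouse because you bought laptop"
```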

Explainable AI (XAI) is the class of systems that provide visibility into how an AI system makes decisions and predictions and executes its actions. XAI explains the rationale for the decision-making process, surfaces the strengths and weaknesses of the process, and provides a sense of how the system will behave in the future. Given the extensiveness with which AI systems are being developed and deployed to upend marketing, as illustrated in the articles in the special issue—from personalizing the user experience, to recommending products and content for customers, to lead-scoring for B2B marketing teams, to automating two-way conversations with customers to nurture relationships—it becomes critical for marketing researchers to understand how to achieve explainability for different types of AI models, assess the tradeoff between prediction accuracy and explanation associated with different choices, and develop and deploy trustworthy AI systems that meet business and fairness objectives.

I briefly differentiate between inherently interpretable AI models and black-box deep learning models, overview XAI approaches to turn black-box models into glass-box models, and discuss the research implications of leveraging XAI in marketing AI applications.

Inherently interpretable models vs. black-box deep-learning models

The process of generating explanations for the behavior of AI systems depends on the type of ML algorithm: algorithms that generate inherently interpretable models versus deep learning algorithms that are complicated in structure and learning mechanisms and generate models that are inherently uninterpretable to human users (Hall and Gill 2019; Du et al. 2018).

Machine learning algorithms such as decision trees, Bayesian classifiers, additive models, and sparse linear models generate interpretable models in that the model components (e.g., the weight of a feature in a linear model, a path in a decision tree, or a specific rule) can be directly inspected to understand the model’s predictions. These algorithms use a reasonably restricted number of internal components (i.e., paths, rules, or features) and thereby provide traceability and transparency in their decision making. As long as the model is accurate for the prediction task, these approaches provide the visibility needed to understand the decisions made by the AI system.
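
A brief illustration with standard scikit-learn (the dataset and the depth limit are arbitrary choices for the sketch): a shallow decision tree can be printed as human-readable rules, which is exactly the kind of direct inspection that makes these models interpretable.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small, inherently interpretable model.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each printed path is a readable rule leading to a prediction.
print(export_text(tree, feature_names=load_iris().feature_names))
```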

In contrast, deep learning algorithms are a class of ML algorithms that sacrifice transparency and interpretability for prediction accuracy. These algorithms are now being employed to develop applications such as predicting consumer behavior from high-dimensional inputs, speech recognition, image recognition, and natural language processing. As an example, convolutional neural networks, which underlie facial recognition applications, extract high-level complex abstractions of a face through a hierarchical learning process that transforms pixel-level inputs of an image into relevant facial features and then into connected features that abstract to the face. The model learns the important features by itself instead of requiring the developer to select them. Because the model involves pixel-level inputs and complex connections across the layers of the network, which yield highly nonlinear associations between inputs and outputs, it is inherently uninterpretable to human users.
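
By way of contrast, the sketch below applies one common model-agnostic, post-hoc XAI technique, permutation importance, to a small neural network treated as a black box; the dataset, model, and hyperparameters are arbitrary stand-ins chosen only to illustrate the idea, not a method endorsed in this article.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Permutation importance does not open the black box; it estimates how much
# each input feature contributes to predictive performance by shuffling that
# feature and measuring the resulting drop in accuracy.
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# A small neural network stands in for the "black-box" model.
model = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
model.fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```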
