Bring Intelligence to Where Critical Transactions Run – An Update from Machine Learning for z/OS

By Kewei Wei posted Jun 07, 2018 06:12 AM



Kewei Wei(魏可伟), Senior Technical Staff Member, Lead Architect of Machine Learning for z/OS, IBM China Development Lab;
Guanjun Cai, Information architect and developer for IBM Machine Learning for z/OS and Db2 for z/OS.


Building the foundation of machine learning

Machine learning and AI are reshaping industries. Gartner predicts that the global enterprise value derived from AI will total $1.2 trillion in 2018, a 70% increase from 2017. By 2022, the number will reach $3.9 trillion [1].

From IBM Watson to Google's AlphaGo, enterprises have made great strides in AI research in the last couple of years. While almost all executives believe AI is a key driver of growth and success [2], they are still in the early stages of applying the technology to their businesses. To accelerate adoption, Gartner recommends that, instead of general AI that enables systems to think and act like humans, enterprises embark on their journey with narrow AI, focusing on highly scoped machine learning solutions that address specific problems [3].

The first step of this journey is to establish a solid foundation in machine learning with some essential building blocks.

Data infrastructure

Data infrastructure is essential to every stage of the machine learning workflow, including data collection, data analysis, and model development. Machine learning requires an underlying structure that gives data scientists ready access to the most current data, with easy discovery, accurate analysis, and proper governance. Unfortunately, more than half of the organizations in a recent Gartner survey indicate that they are still gathering information to define their AI strategy, seemingly a long way from building a comprehensive underlying data structure. Without the much-needed infrastructure, data scientists are forced to spend most of their time discovering and gathering data, instead of analyzing it and turning it into actionable business intelligence [4].

Machine learning workflow and project collaboration

Machine learning in an enterprise should be understood and implemented like a team sport. The success of a machine learning project should be the outcome of a concerted effort across the entire enterprise and close collaboration among business analysts, data engineers, data scientists, machine learning engineers, and application developers.

To support such broad-scale collaboration, your machine learning platform must have a well-defined machine learning workflow, from data ingestion and model training to model deployment, as shown below.



Moreover, the platform should be enabled with a feedback loop so that a model can improve itself over time with constant real-time feedback. This full workflow is the key difference between machine learning and the traditional approach to model building and training.

Machine learning as a service

Machine learning as a service (MLaaS) is a set of services that offer tools for every stage of a machine learning workflow, including data ingestion, data virtualization, data wrangling, model training, and prediction. MLaaS enables an enterprise to benefit from machine learning without bearing the associated cost, time, and risk of putting in place a fully fledged in-house machine learning team [5].

MLaaS deployment doesn’t have to be on a public cloud. A private cloud or on-premises MLaaS solution behind the firewall can deliver the tremendous value of machine learning to an enterprise and its clients.

Machine learning governance

The need for machine learning governance grows with the rapid increase in machine learning adoption. An enterprise must create a solid machine learning governance framework and integrate governance features, such as model governance and audit trace, into every stage of the machine learning workflow. Effective model governance helps prevent bias in models; it helps data scientists understand how a model works prior to deployment and explain how the model performed after deployment. The goal of machine learning governance is to ensure that the entire lifecycle of a machine learning asset is traceable and auditable.

Why machine learning on Z?

Data gravity

In the simplest terms, machine learning is a science that enables systems to act by learning from data. So it’s not a stretch to say that data quality is key to the success of machine learning. In the absence of quality data, data scientists spend on average 80% of their time gathering, cleansing, and managing data; up to 20% of that time goes to collecting data alone [6].

Many data quality problems are introduced while data is in motion, for example, when it is moved off a platform for analytic processing. These problems can be avoided when analytics resources, such as machine learning, are moved to where the data resides. This approach is known as “data gravity” [7].

IBM values and invests heavily in data gravity because it recognizes the tremendous benefits that data gravity can bring to customers, including reduced cost, shortened time to value, and minimized security exposure. IBM Machine Learning for z/OS and IBM Open Data Analytics for z/OS (IzODA) are among the offerings that decidedly bring machine learning capabilities to Z, the platform where most enterprise data resides. With these machine learning solutions, data scientists can easily explore the most current and trusted data on Z in a secure manner, without having to move it off the platform.

These solutions are not limited to processing data on Z. You have the option of applying them to data on any platform where model training takes place.

Resiliency, availability, and reliability for operation analytics

As machine learning changes the way that enterprises operate their business, predictive analytics - using machine learning models to predict future outcomes - becomes an important part of many online transactions. For example, money transfer scams that target senior citizens happen in many countries. The longer it takes to detect the risk, the less likely the victims will be able to recover their losses.

Enterprises are pursuing the ability to predict such risks before they even arise. They have started to define service agreements for predictive analytics at the same level as for the operational process. As a result, the same level of resiliency, availability, and reliability is required of the underlying machine learning service and technology. This is exactly where the strength of System Z lies, especially when the data that is needed to make predictions originates on Z.

Industry-leading security

Security and compliance are two of the biggest concerns of many businesses, because standards for data security are increasingly high and regulations for compliance are progressively strict. The General Data Protection Regulation (GDPR), for example, is the latest EU standard for data security and raises the bar for compliance even higher. System Z is the most secure system in the world. The latest z14 continues to harden the system with pervasive encryption. Running analytics against data on System Z is the safest way to meet the security requirement.

But the same cannot be said about security and regulatory compliance on distributed platforms. Every time a new copy of data is created and offloaded to a distributed platform, the risk of information leakage and other abuse increases.

Machine Learning for z/OS and IzODA

The open source communities have steadfastly contributed a great deal to the development and advancement of machine learning capabilities. Many state-of-the-art innovations come from these communities, and many data scientists learn machine learning and hone their skills through open source projects. As a result, enterprises embrace open source technologies and solutions when they start to build their internal machine learning foundations or grow their external machine learning portfolios.

A good example of the open source contribution to machine learning is scikit-learn, the most popular machine learning framework made available by open source communities and integrated into enterprise machine learning solutions. Many data scientists begin their machine learning journey in Python with scikit-learn and contribute back once they are able to modify and extend the framework. As of May 2018, the scikit-learn code base had seen 22,772 commits from 1,092 contributors on GitHub. Scikit-learn is indeed a framework by the communities and for the communities.

Apache Spark is another example of well-adopted open source software that is now widely integrated into a variety of enterprise solutions. Simply put, Apache Spark is the engine of choice for large-scale data processing and in-memory computing by thousands of enterprise users.

In addition, open source solutions give data scientists a range of options for implementing each stage of the machine learning workflow. For example, data scientists can choose Spark or pandas data frames for data processing, and matplotlib or seaborn for data visualization.

IBM recognizes the immense contributions to machine learning by the open source communities and brings critical open source machine learning technologies to the Z space through the Machine Learning for z/OS and IzODA offerings. Along with Spark, IzODA supports the Anaconda distribution on z/OS, which consists of hundreds of Python libraries from open source communities. These libraries include but are not limited to scikit-learn, NumPy, SciPy, pandas, matplotlib, and seaborn [8].

Besides embracing open source, IzODA affords data scientists the much-coveted access to data on Z. The optimized data integration layer, an IzODA component, provides data scientists secured access to almost all types of Z data sets, including Db2, IMS, VSAM, SMF, and IDAA, through a unified SQL interface.
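Conceptually, the unified SQL interface lets a single query span data that physically lives in different Z data stores. The sketch below uses Python's built-in sqlite3 purely as a runnable stand-in; with IzODA you would connect through its own driver to virtual tables mapped to Db2, VSAM, and the other sources, and the table and column names here are invented for illustration:

```python
import sqlite3

# sqlite3 stands in for the IzODA data integration layer. Imagine
# "accounts" mapping to a Db2 table and "transactions" to a VSAM data
# set, both exposed as relational tables behind one SQL interface.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (acct_id INTEGER, holder TEXT);
    CREATE TABLE transactions (acct_id INTEGER, amount REAL);
    INSERT INTO accounts VALUES (1, 'ACME'), (2, 'Globex');
    INSERT INTO transactions VALUES (1, 100.0), (1, 250.0), (2, 40.0);
""")

# One SQL statement joins across both "data sources" -- the point of
# the unified interface is that the query does not care where the
# underlying data actually resides.
rows = conn.execute("""
    SELECT a.holder, SUM(t.amount) AS total
    FROM accounts a
    JOIN transactions t ON a.acct_id = t.acct_id
    GROUP BY a.holder
    ORDER BY a.holder
""").fetchall()
```

The same join written against the real data integration layer would read identically; only the connection changes.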

If IzODA is the engine of machine learning on z/OS, Machine Learning for z/OS is the vehicle that IzODA powers, charging full speed ahead and enabling enterprises to maximize the value of their data on Z. As an end-to-end enterprise machine learning platform, Machine Learning for z/OS helps you create, train, and deploy machine learning models to extract value from your mission-critical transactions on Z. It simplifies and automates the machine learning workflow and enables close collaboration across disciplines. By keeping the data on your Z systems, Machine Learning for z/OS significantly reduces the cost, security risk, and time needed to create, evaluate, and deploy machine learning models.

Machine Learning for z/OS provides the following functionality to help you maximize the value from your enterprise data:

  • Ingestion of data from different sources, including Db2, IMS, VSAM, IDAA, and System Management Facilities (SMF)
  • Transformation and cleansing of data to be used as the algorithm input
  • Training of a model for the selected algorithm with the prepared data
  • Evaluation, deployment, and management of the trained model
  • Integration of the MLeap engine for fast online scoring
  • Import and deployment of Predictive Model Markup Language (PMML) models
  • A RESTful API for an application to embed the predictions that are made by the model
  • Monitoring of the status, accuracy, and resource consumption of the model
  • Automated feedback loop to feed new data for re-training of the model if the accuracy deteriorates
  • An integrated notebook interface for data scientists to use Machine Learning APIs for interactive processing
  • A web administration dashboard for managing Jupyter kernels and Kubernetes services, nodes, and pods
  • Security control through integration of authentication and authorization on Z Systems
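As a sketch of how an application might embed predictions through the RESTful scoring API mentioned above, the snippet below builds a JSON scoring payload. The endpoint URL, path, and field names are hypothetical illustrations, not the product's documented interface; consult your deployment for the actual values:

```python
import json

# Hypothetical endpoint -- the real host, port, and path come from your
# Machine Learning for z/OS deployment.
SCORING_URL = "https://mlz.example.com/iml/scoring/online/fraud-model"

def build_scoring_request(record):
    """Package one transaction record as a JSON scoring payload.

    The fields/values layout here is an illustrative convention, not
    the product's documented schema.
    """
    return {
        "fields": list(record.keys()),
        "values": [list(record.values())],
    }

payload = build_scoring_request(
    {"amount": 2500.0, "channel": "wire", "age": 72}
)
body = json.dumps(payload)

# The application would then POST the payload to the scoring service,
# for example with the requests library:
# response = requests.post(
#     SCORING_URL, data=body,
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer <token>"})
```

Because scoring is a plain HTTP call, any application on any platform, including a CICS transaction calling out through its HTTP support, can consume the model's predictions.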

Updates to Machine Learning for z/OS

IBM Machine Learning for z/OS 1.1.0 became generally available in March 2017. New and enhanced features were continuously delivered through five minor releases in the ensuing months. May 2018 marked the release of Machine Learning for z/OS 1.2.0, which introduced important new features to address the needs of data scientists, application architects, application developers, and machine learning engineers.

The updates to Machine Learning for z/OS in version 1.2.0 include but are not limited to the following new and changed features:

  • Integration of the SPSS Modeler for developing accurate models through a powerful, intuitive, graphical interface.
  • Lifecycle management of PMML models that are trained in SPSS Modeler and RStudio.
  • Interoperability of models between Machine Learning for z/OS and Data Science Experience Local.
  • Availability of APIs for implementing Machine learning as a service for your enterprise.
  • Audit trace capability for model governance.
  • Performance enhancements in scoring engines, especially the scoring engine that runs in a CICS region.
  • Security enhancements through the implementation of IzODA client authentication, user authentication with Knox and LDAP, and GDPR compliance requirements.
  • Enhancements in scoring service management with z/OSMF workflow service and REST APIs.
  • Enhancements in project and asset management and user collaboration.

Here are a few highlights of those updates:

SPSS Modeler, a new visual coder option

Like application developers who prefer certain programming languages over others, data scientists have their preferences in machine learning frameworks and development tools. Machine Learning for z/OS provides a range of options to meet their needs.

Let’s take programming languages for machine learning as an example. As part of IzODA, Scala and Python are now available on z/OS. Data scientists who prefer Spark can choose Scala or Python 3.6 to explore data and train Spark ML models in Jupyter notebooks. As mentioned earlier, IzODA also makes the Anaconda distribution available on z/OS, so data scientists who know Python can use NumPy, pandas, matplotlib, seaborn, scikit-learn, and any other Python library they like in Jupyter notebooks. Last but not least, Machine Learning for z/OS integrates RStudio for data scientists who prefer R.
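For instance, a data scientist working in a notebook on the Anaconda stack could train a baseline scikit-learn model in a few lines. This is a generic sketch on synthetic data, not tied to any particular Z data source:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data that would normally be pulled from Z
# through the IzODA data integration layer.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

# Hold out a test split, train a simple baseline classifier, and
# measure its accuracy on the held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
score = model.score(X_test, y_test)
```

The same notebook workflow applies whether the frame behind `X` comes from a pandas read or a Spark query against Z data.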

With the increasing need for machine learning, many “citizen data scientists” have started to take on machine learning projects as part of their job, no longer as just a hobby. Machine Learning for z/OS provides visual coders to make it easier for them to prototype a baseline model quickly. The visual coders guide them step by step through a web interface or enable them to drag and drop objects on a canvas to build machine learning models.

Machine Learning for z/OS 1.1.0 features a Visual Model Builder wizard that lets data scientists create Spark ML models without writing a single line of code. Version 1.2.0 introduces an integrated SPSS Modeler that comes with an intuitive workbench, as shown below, and an abundance of machine learning algorithms.



With the integrated SPSS Modeler, data scientists can use powerful interactive visualization techniques to explore the data in Db2 for z/OS, and a drag-and-drop workflow with advanced algorithms to create models.

Continuous machine learning system as a service

Almost all machine learning vendors offer machine learning as a service. They provide graphical UIs and sets of APIs for data scientists to create machine learning models. They also offer services to create real-time scoring endpoints or to enable batch scoring.

However, a complete machine learning workflow doesn’t, and shouldn’t, end with the deployment of a model. One of the most important challenges in machine learning is to sustain and improve the performance of a model over time after deployment. That is where Machine Learning for z/OS differentiates itself from other vendors, with a feedback loop and model retraining based on real-time data. These capabilities give enterprises a continuous learning system for their machine learning models.

As illustrated below, Machine Learning for z/OS 1.2.0 provides more advanced services to help manage the lifecycle of a machine learning model.



  1. When a model is created in Machine Learning for z/OS, data scientists can choose to enable the auto-retraining option. When the performance metrics of the model fall below a predefined threshold, the model is retrained with the feedback data.
  2. If the model retrained from the feedback data is better than the existing one, a new version of the model is created and published to the Machine Learning for z/OS repository.
  3. The DevOps process checks whether a new version has been created but not yet deployed, through a REST call to the Machine Learning for z/OS repository service.
  4. If the REST call detects a new version and determines that the version can be automatically deployed to a testing environment, a regression test is triggered.
  5. If the new version passes the regression test, it is deployed to the production environment, replacing the existing version.
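The promotion logic in steps 3 through 5 can be sketched as plain control flow. Here the repository is mocked with an in-memory dict and the regression test is a placeholder; in the real workflow these checks go through the Machine Learning for z/OS repository REST service and your test suite:

```python
# In-memory stand-in for the model repository; the real DevOps process
# would query the repository REST service instead.
repository = {
    "fraud-model": {"latest_version": 3, "deployed_version": 2},
}

def regression_test(model_name, version):
    """Placeholder for step 4; a real suite would replay held-out
    transactions against the candidate version."""
    return True

def promote_if_new(model_name):
    """Steps 3-5: detect an undeployed version, test it, deploy it."""
    entry = repository[model_name]
    if entry["latest_version"] == entry["deployed_version"]:
        return "up-to-date"                        # step 3: nothing new
    candidate = entry["latest_version"]
    if not regression_test(model_name, candidate):  # step 4
        return "test-failed"
    entry["deployed_version"] = candidate           # step 5: replace in prod
    return "deployed"

result = promote_if_new("fraud-model")
```

Running the check twice shows the idempotence the DevOps loop relies on: the first pass promotes version 3, and a second pass reports the model as up to date.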

Model governance and security

As discussed earlier, data security remains a concern of many enterprises. Machine Learning for z/OS makes data accessible to data scientists while keeping that access protected and governed.

Access in Machine Learning for z/OS is controlled through roles. Roles are clearly defined for data scientists, application developers, and administrators, and the privileges for each role are strictly enforced.

Machine Learning for z/OS is also enabled with audit trace, which tracks all activities that create, update, or delete machine learning models and deployments.
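An audit trace of this kind boils down to an append-only log of who did what to which model and when. The record below is purely illustrative; the field names are invented and do not reflect the product's actual audit schema:

```python
import json
from datetime import datetime, timezone

def audit_event(user, action, model, version):
    """Build one illustrative audit-trace record.

    Field names are hypothetical, chosen only to show the kind of
    information an auditable model lifecycle needs to capture.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,   # e.g. "create", "update", "delete", "deploy"
        "model": model,
        "version": version,
    }

event = audit_event("alice", "deploy", "fraud-model", 3)
line = json.dumps(event)  # one append-only log line per activity
```

Because every lifecycle activity emits such a record, an auditor can later reconstruct the full history of any model.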

Model training anywhere, model deployment anywhere

Machine Learning for z/OS opens the opportunity for data scientists to generate real-time insights from data on Z, but it does not confine machine learning to Z. Instead, it allows data gravity to dictate where machine learning and other data analytics should take place. If data is on Z, machine learning runs on Z. On the other hand, if data originates on, or is moved to, a distributed platform, machine learning should happen on that distributed platform.

Machine Learning for z/OS is designed and developed with data gravity at its core. As a result, in addition to processing data with IzODA, Machine Learning for z/OS has a built-in Spark cluster for training models on small or medium-sized data sets. It also provides the option of running some machine learning services on a remote Spark cluster.

Interoperability of models in IBM machine learning offerings

Machine Learning for z/OS is just one of the IBM offerings that deliver machine learning capabilities on Z and other platforms. Two other machine learning solutions are Watson Studio for the public cloud and Data Science Experience (DSX) Local for distributed systems and private clouds. All three solutions provide similar functions and promise to deliver the same user experience. As shown below, IBM’s vision is that machine learning models are interoperable across these solutions; that is, you can create models in one solution and then seamlessly deploy them in another.


For example, you can now easily import Spark ML and PMML models from DSX Local and deploy them in Machine Learning for z/OS 1.2.0.



Machine learning is becoming, and will remain, an engine of growth in IT and other industries in the decade to come. System Z, as the most secure and reliable platform for the enterprise, is embracing this trend. With data gravity at its core, IBM Machine Learning for z/OS, together with IBM Open Data Analytics for z/OS, brings the power of machine learning to Z by leveraging the combined strength of IBM’s industry-leading proprietary technologies and popular open source contributions. Together, these solutions offer an end-to-end machine learning platform that allows business analysts, data scientists, data engineers, and application developers to collaborate and turn their mission-critical transaction data into actionable business intelligence.



[1] Charlie Osborne, “Artificial intelligence will be worth $1.2 trillion to the enterprise in 2018”.

[2] Christy Pettey, “Gartner Survey of More Than 3,000 CIOs Confirms the Changing Role of the Chief Information Officer”.

[3] Kasey Panetta, “Gartner Top 10 Strategic Technology Trends for 2018”.

[4] “The most difficult thing in data science: politics”.

[5] Jeevan Biswas, “What Is Machine Learning as A Service (MLaaS)?”.

[6] Gil Press, “Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey Says”.

[7] Dave McCrory, “Data Gravity – in the Clouds”.

[8] Marianne Eggett, “IBM Open Data Analytics for z/OS for Mainframe Data Access – An Evolution of Mainframe Apache Spark”.