Intelligence, Inside and Outside.

Combining AI With A Trusted Data Approach On IBM Power To Fuel Business Outcomes

Data fuels artificial intelligence (AI), and the infrastructure you run AI on is essential. Business leaders exploring the many advantages that AI promises must first ask: “How can my team build, train and deploy AI systems in my enterprise? And do we have the right infrastructure to support both the compute-intensive and the memory-intensive requirements of AI workloads?”

They will quickly find that there is no one-size-fits-all approach to AI infrastructure; instead, they must align the right infrastructure to the AI task at hand. Considerations include not only the AI task and the size and scale of the models, but also security, privacy, resiliency and regulatory compliance. AI workloads will increasingly form the backbone of mission-critical operations and will therefore require infrastructure that is resilient by design.

With the recent launch of IBM watsonx—IBM’s AI and data platform built for enterprises—we are helping clients create competitive advantage by providing tools across the entire AI lifecycle—like a planned SAP SDK for watsonx, expected in Q1 2024—that can help scale AI across their organizations.

IBM Power is designed for AI and advanced workloads, allowing enterprises to deploy and inference AI algorithms on the most sensitive data and transactions residing on Power systems. For example, clients can leverage the AI acceleration built into every Power10 core to process up to 42% more batch queries per second on IBM Power S1022 than on a compared x86 server during a peak load of 40 concurrent users, with each system running large language AI models. [1] Moreover, Power10’s acceleration allows clients to inference large language models in less than one second on IBM Power S1024 servers with 2×12 cores. [2]
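Throughput figures like the batch-queries-per-second numbers cited in footnote [1] are typically derived by timing a fixed set of batched requests and dividing. A minimal, platform-agnostic sketch of that measurement—the stand-in `infer` function here is purely illustrative, not IBM’s PrimeQA workload:

```python
import time

def measure_batch_qps(infer, batches):
    """Time a run of batched inferences and return batches per second."""
    start = time.perf_counter()
    results = [infer(batch) for batch in batches]
    elapsed = time.perf_counter() - start
    return len(results) / elapsed

# Stand-in "model": a trivial function in place of a real inference call.
qps = measure_batch_qps(sum, [[1, 2]] * 100)
print(f"{qps:.1f} batch queries per second")
```

In a real benchmark the batches would be concurrent user queries and `infer` a call into the deployed model; the published numbers also depend on thread counts and batch sizes as noted in the footnote.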

Impactful, AI-driven capabilities

Whether clients need to integrate their data into data fabrics and AI platforms or deploy AI models like generative AI close to their data, IBM Power can help enterprises address concerns around time-to-market for AI-driven solutions with a fit-for-purpose approach.

For many of our clients, that fit-for-purpose approach includes SAP HANA on IBM Power for record-breaking performance, affordable scaling and increased uptime. [3] We plan to make available an SAP ABAP SDK for watsonx, which is intended to make it easier to consume watsonx services within SAP ABAP environments running on premises, in the cloud and on top of the standalone ABAP engine in SAP’s Business Technology Platform. The SDK is also intended to allow RISE with SAP clients to create extensions in SAP ABAP environments. And clients who run SAP HANA workloads on IBM Power can inference near their data on Power systems and, through the SDK, leverage watsonx in their ABAP environments.


To give you a possible use case of this in action, clients could harness watsonx to simplify the analysis of operational parameters derived from SAP systems. This could help identify patterns that indicate potential asset failures and establish context around the health of devices. With these predictive capabilities, a system can anticipate failures, empowering organizations to take preemptive measures that help prevent costly downtime and disruptions.
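At its simplest, that kind of pattern detection flags readings that deviate sharply from recent behavior. A minimal sketch of the idea—the sensor values, thresholds and function names here are illustrative assumptions, not part of any IBM or SAP API:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Return indices of readings that deviate sharply from the trailing
    window -- a stand-in for the pattern detection a predictive-maintenance
    model would perform on operational parameters from an SAP system."""
    anomalies = []
    for i in range(window, len(readings)):
        trailing = readings[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Stable pump temperatures with one sudden spike at index 8.
temps = [71.0, 70.8, 71.2, 70.9, 71.1, 71.0, 70.9, 71.2, 94.5, 71.0]
print(flag_anomalies(temps))  # → [8]
```

A production system would replace this z-score rule with a trained model and feed its flags back into maintenance workflows, but the shape of the pipeline—stream readings in, score against learned behavior, act preemptively—is the same.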

Our clients can achieve impactful business outcomes because IBM Power10, with on-chip acceleration and large memory, provides a scalable and secured platform for embedding AI into clients’ transaction workflows and end-customer experiences. We’ve optimized IBM Power for the most common AI libraries, made available by IBM’s AI/ML partner Rocket Software via RocketCE, which will continue to support AI applications that capitalize on Power10’s innovations.

I’m excited to share that we intend to expand our portfolio with Rocket Software in Q4 2023 with the addition of Rocket AI Hub for IBM Power, an integrated set of open-source AI platform tools such as Kubeflow. Rocket AI Hub for IBM Power will be available with an optional commercial support add-on as well.

Today, we are also announcing the availability of IBM Power10 in IBM Power Virtual Server, starting in select data centers in the United States and expanding into additional geographies later this year. This will continue to expand fleet compute capabilities and offer additional choice when deploying business-critical workloads on IBM Cloud. Clients with demanding performance requirements, or whose software is licensed by core, will benefit from the added performance of the Power10 processor in IBM Cloud. Additionally, existing clients looking for Power10 in IBM Cloud to align with their on-premises environments for application development, testing and/or backup and disaster recovery, as well as new clients adopting IBM Power Virtual Server for the first time, can be confident they are choosing IBM’s most current technology along with the most current operating systems.

“IBM has been a longtime trusted provider for our core systems, and we are excited to explore the new functionality that IBM is bringing to the Power10 platform that enables modern software approaches specific to machine learning,” said Ben Metz, Chief Digital and Technology Officer at Jack Henry, a leading financial technology services company. “With real-time inferencing, IBM Power has the potential to bring insights and decision-making closer to our mission-critical data that is an essential part of our technology modernization strategy.”


Clients can also benefit from Multi Architecture Cluster (MAC) support, allowing them to combine IBM Power and x86 worker nodes in a single Red Hat OpenShift cluster. MAC helps clients align the right AI task with the right infrastructure, removing the need for every AI task to run on a single platform, so they can create applications where necessary and take them to production quickly. Additionally, IBM Cloud Pak for Data 4.8 is planned to expand support for IBM Power in Q4 2023, bringing the latest data science components (such as Watson Machine Learning, Watson Studio, Analytics Engine powered by Apache Spark, Data Refinery and Decision Optimization) to Power for enterprise clients.
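In a mixed Power/x86 cluster, aligning a task to an architecture ultimately comes down to scheduling constraints such as a Kubernetes `nodeSelector` on the standard `kubernetes.io/arch` node label. A small hypothetical sketch of that mapping—the task names and the task-to-architecture assignments are illustrative assumptions, not an OpenShift API:

```python
def node_selector_for(task):
    """Map an AI task to the worker-node architecture best suited to it.

    Uses the standard Kubernetes node label kubernetes.io/arch; the task
    names and assignments below are hypothetical examples.
    """
    arch_by_task = {
        "inference-near-data": "ppc64le",  # run next to the data on Power nodes
        "gpu-training": "amd64",           # run on x86 worker nodes
    }
    return {"kubernetes.io/arch": arch_by_task[task]}

print(node_selector_for("inference-near-data"))
# → {'kubernetes.io/arch': 'ppc64le'}
```

The returned dictionary is what would be placed under `spec.nodeSelector` in a pod or deployment manifest, letting both workloads coexist in one cluster while each lands on the appropriate hardware.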

Working with our ecosystem partners to bring AI capabilities on IBM Power to clients

For AI to have a meaningful impact, it often takes a team effort. Building on the Salesforce MuleSoft and IBM relationship, MuleSoft and IBM are in discussions to support Anypoint Flex Gateway on IBM Power in the first quarter of 2024. MuleSoft Flex Gateway is an Envoy-based API gateway designed to manage and secure APIs running anywhere. By supporting Flex Gateway on IBM Power, clients can API-enable their Power-based applications for secured access by applications across their enterprise. With this, clients can modernize, share and connect mission-critical data in Power applications with enterprise applications to enhance AI models using watsonx.
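To sketch what “API-enabling” a Power-based application might look like from the consuming side, here is a hypothetical client that prepares an authenticated JSON call to an API published behind a gateway. The base URL, resource name and token are placeholders, not a real MuleSoft or IBM endpoint:

```python
import json
import urllib.request

def build_gateway_request(base_url, resource, token, payload):
    """Prepare an authenticated JSON POST for an API behind a gateway.

    All values passed in are illustrative; a real deployment would use
    the gateway's published endpoint and its configured auth scheme.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/{resource}",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_gateway_request(
    "https://gateway.example.com/power-api",  # hypothetical gateway URL
    "orders",
    "demo-token",
    {"customer_id": 42},
)
print(req.full_url, req.get_method())
```

The point of the gateway is that the calling application never needs to know it is reaching a Power-hosted backend: it sees a managed, secured API like any other in the enterprise.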

Make the most of your data with AI on IBM Power

As I said at the beginning, there is no one-size-fits-all approach for your AI infrastructure. I’ve found that clients benefit most when we co-create and align the right infrastructure to the right AI task at hand.

The perfect partner on this journey is IBM Consulting. They are working directly with global clients and partners to co-create what’s next in AI to fuel business transformation. These dedicated consultants will bring domain expertise to help clients leverage tools to create watsonx models in SAP environments and can also help clients who are considering running SAP HANA workloads on IBM Power or IBM Power Virtual Server to accelerate application deployments.


Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.


[1] Comparison based on IBM internal testing of question-and-answer inferencing using PrimeQA models (https://github.com/primeqa, based on Dr. Decr and ColBERT models). Results valid as of August 22, 2023, and conducted under laboratory conditions; individual results can vary based on workload size, use of storage subsystems and other conditions. Comparison is based on total throughput in score (inferences) per second on IBM Power S1022 (1×20-core/512GB) versus Intel Xeon Platinum 8468V-based (1×48-core/512GB) systems. Test was run with Python and Anaconda environments including packages of Python 4.0 and PyTorch 2.0. The Python libraries used are platform-optimized for both Power and Intel. Configuration: OMP-NUM-THREADS = 4; batch size = 60; 40 concurrent users. IBM Power S1022: 6.26 batch queries inferenced per second with 40 concurrent users; Sapphire Rapids 8468V: 4.4 batch queries inferenced per second with 40 concurrent users.
IBM S1022 Power system: https://www.redbooks.ibm.com/abstracts/redp5675.html
Compared x86 system: Supermicro SYS-221H-TNR system: https://www.supermicro.com/en/products/system/hyper/2u/sys-221h-tnr
Models fine-tuned by IBM on a corpus of IBM-internal data.

[2] Based on IBM internal testing of question-and-answer inferencing using PrimeQA models (https://github.com/primeqa, based on Dr. Decr and ColBERT models). Results valid as of August 22, 2023, and conducted under laboratory conditions; individual results can vary based on workload size, use of storage subsystems and other conditions. Extrapolated result for an IBM Power S1024 (2×12-core 3.4–4GHz/512GB) is based on a measured inference time of 1.008 seconds on IBM Power S1022 (2×12-core 2.9–4GHz/512GB). Test was run with Python and Anaconda environments including packages of Python 4.0 and PyTorch 2.0. The Python libraries used are platform-optimized for Power. Configuration: OMP-NUM-THREADS = 32; batch size = 1.

IBM S1024 Power system: https://www.redbooks.ibm.com/abstracts/redp5675.html

Models fine-tuned by IBM on a corpus of IBM-internal data:
https://github.ibm.com/systems-cto-innovation/ai-on-ibm-systems/tree/master/primeqa/inference

[3] Comparison based on single 8-socket systems (IBM Power E1080 3.55–4 GHz, 120 core, AIX and Superdome Flex 280 2.90 GHz, Intel Xeon Platinum 8380H) using published results at www.spec.org/cpu2017/results/ as of 02 September 2021. SPEC® and the benchmark names SPECrate®2017_int_base and SPECrate®2017_int_peak are registered trademarks of the Standard Performance Evaluation Corporation. For more information about SPEC CPU 2017, see www.spec.org/cpu2017/.

By: Bargav Balakrishnan
Originally published at: IBM Blog

Source: cyberpogo.com

