A Fast Shift to Digital to Reach Kinetica’s End Users

I’d been paying attention to the news as SARS-CoV-2 began spreading around the world in February and March. It was when Mobile World Congress in Barcelona was suddenly cancelled that I really understood how quickly COVID was going to radically reshape our world.

Over the next few weeks, conference after conference was cancelled or converted into a digital-only experience. Those early digital-only conferences offered only a small subset of the usual content, with no partner exhibition or speaking opportunities. This was a problem for Kinetica, since the next six months of our marketing calendar were anchored in several important partner conferences.

The marketing team rapidly revamped Kinetica’s demand generation strategy to be 100% digital. They asked me to produce an end-user education series to help practitioners such as data scientists, data engineers, geospatial analysts, and application developers understand the kinds of problems Kinetica can help them solve.

With the help of Kinetica’s amazing technologists and our plucky demand generation team, we cranked out a huge curriculum of talks featuring Kinetica’s many subject matter experts.

Kinetica can best be thought of as a convergence of real-time data warehouses and context-independent data warehouses, with some additional capabilities to support deployment of machine learning models at scale, and to help developers create analytics-driven applications. In Gartner’s updated view of the DBMS market, Kinetica is best suited for event stream processing use cases, and for creating Augmented Transaction Processing solutions.

You’ll find that most of the talks are geared towards data engineers tasked with designing an event stream processing pipeline at scale. Such a data engineer will want to enable real-time analysis of data such as time-series sensor feeds from IoT devices, or change-data-capture (CDC) messages from upstream transactional systems. The data engineer would collect this data with an event streaming platform such as Apache Kafka, which can rapidly feed data into Kinetica. Once the data is ingested, the data engineer would set up real-time data transformation and feature calculation, and then have these features fed into a deployed ML model for scoring.
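
To make that flow concrete, here is a minimal sketch in Python using the open-source kafka-python client. The topic name, feature logic, and scoring function are hypothetical stand-ins; a production pipeline would rely on Kinetica’s native Kafka ingest and in-database processing rather than a hand-rolled loop like this.

```python
# Illustrative sketch only: consume sensor events, derive features, score them.
# Topic name, feature logic, and score() are hypothetical placeholders.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "iot-sensor-readings",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

def compute_features(reading: dict) -> list:
    """Derive simple model features from one sensor reading."""
    return [reading["temperature"], reading["vibration"] ** 2]

def score(features: list) -> float:
    """Stand-in for a deployed ML model's predict() call."""
    return sum(features) / len(features)

for message in consumer:
    features = compute_features(message.value)
    print(f"anomaly score: {score(features):.3f}")
```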

We also have a few talks for geospatial analysts and application developers. Generally, GIS users will be quite comfortable with Kinetica’s capabilities, especially if they are already familiar with PostgreSQL with PostGIS. Developers tend to come to a Kinetica project later than the other end users. They’ll be tasked with connecting their applications to Kinetica via the REST API, or with displaying the map visualizations rendered by Kinetica.
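
For a flavor of what that PostGIS familiarity buys, here is an illustrative proximity query issued from Python. The DSN, table, and column names are hypothetical, the driver is a generic DB-API one, and exact geospatial function signatures vary between engines, so treat this strictly as a sketch.

```python
# Illustrative only: a PostGIS-style proximity query from Python.
# DSN, table, and column names are hypothetical placeholders.
import pyodbc  # any DB-API-compatible driver would look similar

conn = pyodbc.connect("DSN=kinetica")  # hypothetical data source name
cursor = conn.cursor()

# Find delivery trucks within 500 meters of a depot (WKT point).
cursor.execute(
    """
    SELECT truck_id, last_seen
    FROM   truck_positions
    WHERE  ST_DWithin(location, ST_GeomFromText('POINT(-122.27 37.80)'), 500)
    """
)
for truck_id, last_seen in cursor.fetchall():
    print(truck_id, last_seen)
```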

I’d like to give special thanks to all my colleagues who helped us roll out so much amazing educational content in such a short period of time.

Greenplum Experts Panel, Greenplum Operations at Scale – Greenplum Summit 2019

Slides from a panel I hosted with some of Pivotal Greenplum’s largest customers.

How to Meet Enhanced Data Security Requirements with Pivotal Greenplum

Cross posted from The Pivotal Blog.

As enterprises seek to become more analytically driven, they face a balancing act: capitalizing on the proliferation of data throughout the company while simultaneously protecting sensitive data from loss, misuse, or unauthorized disclosure. However, increased regulation of data privacy is complicating how companies make data available to users.

Join Pivotal Data Engineer Alastair Turner for an interactive discussion about common vulnerabilities to data in motion and at rest. Alastair will discuss the controls available to Greenplum users—both natively and via Pivotal partner solutions—to protect sensitive data. We’ll cover the following topics:

  • Security requirements and regulations like GDPR
  • Common data security threat vectors
  • Security strategy for Greenplum
  • Native security features of Greenplum

Speakers: Alastair Turner, Data Engineer & Greg Chase, Business Development, Pivotal

Boost Greenplum BI Performance with Heimdall Data

Jeff Kelly talks with Greg Chase and Eric Brandsberg in a Built to Adapt interview at Greenplum Summit.
Eric Brandsberg, CTO of Heimdall Data, and Greg Chase, Business Development Lead at Pivotal, chat with Jeff Kelly about how Heimdall Data can help you boost Greenplum BI performance.

Cross posted from the Pivotal Blog.

One of the more interesting startups I’ve run across recently is Heimdall Data. They offer a “SQL-savvy” proxy platform that slips elegantly into your application tier to improve the performance and reliability of your backend data sources.

Here are some use cases for Pivotal Greenplum that we have been developing with Heimdall:

  • SQL Caching — This is super helpful when a Pivotal Greenplum instance supports traditional BI users who issue repeated queries. Heimdall Data auto-caches and auto-invalidates SQL results in Pivotal GemFire without code changes. Users experience faster response times while system capacity is freed up for the heavier workloads your data scientists might run.
  • Automated Master Failover — Typical Pivotal Greenplum deployments have a standby master node. However, actually failing over to the standby in the event of a failure of the active master node is a manual process. Heimdall can automate failover to the standby master node in Greenplum. Heimdall is an enhancement over Pgpool because it handles failover behind the scenes without the need for development work.
  • SQL Traffic Manager — We also think Heimdall could be useful for customers who want to combine OLTP with their analytics, such as in an HTAP application. In this case, Heimdall determines whether the SQL operations it receives should be executed by Pivotal Greenplum, by an associated PostgreSQL database, or by both; a simplified sketch of this routing idea appears below. This provides the kind of performance that transactional applications expect, and it also means the Pivotal Greenplum data warehouse has access to up-to-date data. Like most of Heimdall’s approaches, this solution slips in and requires no changes to your SQL queries to work.
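
To make the routing idea concrete (and to be clear, this is not Heimdall’s implementation, just a toy illustration of read/write splitting), here is a short Python sketch:

```python
# Toy sketch of a SQL traffic manager's core decision: route writes to the
# transactional database and analytical reads to the warehouse. Real proxies
# like Heimdall parse SQL properly rather than inspecting the first keyword.
WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "MERGE", "COPY"}

def route(sql: str) -> str:
    """Return which backend should execute this statement."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in WRITE_VERBS:
        # Writes land on PostgreSQL first; a real traffic manager would
        # also keep the warehouse in sync.
        return "postgres"
    return "greenplum"  # analytical reads go to the warehouse

assert route("SELECT count(*) FROM orders") == "greenplum"
assert route("insert into orders values (1)") == "postgres"
```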

Listen to our interview with Heimdall CTO Eric Brandsberg from Greenplum Summit 2018 in Jersey City this last May.


Demand Your On-Demand RabbitMQ Clusters Now

Announcing a New Version of RabbitMQ for Pivotal Cloud Foundry

Cross posted from The Pivotal Blog.

What developers love about RabbitMQ is that it is “messaging that just works.” In other words, they can focus on how their applications communicate — not on minutiae such as establishing connections or guaranteeing the availability of senders and receivers.

But let’s face it: in the cloud-native era, “just works” is a progressively higher bar to reach.  Historically, RabbitMQ was easier to get running and operate than legacy messaging middleware. Today, developers expect the service to “just be there” when they want it.

Today, we’re proud to announce an important new capability of RabbitMQ for Pivotal Cloud Foundry (PCF). Developers can now provision their own dedicated RabbitMQ clusters with a cf create-service call. This reflects our continuing focus on improving developer self-service automation in Pivotal Cloud Foundry.

Typical Use Cases for RabbitMQ for PCF

RabbitMQ for Pivotal Cloud Foundry is an integrated distribution of RabbitMQ that allows Pivotal Cloud Foundry platform operators to provide a “messaging-as-a-service” offering to their application developers. RabbitMQ for PCF implements the On-Demand Service Broker interface, which provides a high degree of self-service automation. This allows developers to provision their own instances of RabbitMQ clusters. Typically developers will instantiate these dedicated RabbitMQ clusters to provide communications for new cloud-native as well as migrated and replatformed applications.
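
As a rough sketch of the developer experience after binding such an instance, here is a minimal Python producer using the pika client. The p.rabbitmq service key and credential layout are assumptions based on typical Cloud Foundry bindings, so check VCAP_SERVICES in your own environment.

```python
# Minimal sketch, assuming a RabbitMQ for PCF instance bound to the app.
# The "p.rabbitmq" key and "uri" field are assumed; inspect VCAP_SERVICES
# in your environment for the actual structure.
import json
import os

import pika  # pip install pika

services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["p.rabbitmq"][0]["credentials"]  # assumed service key

connection = pika.BlockingConnection(pika.URLParameters(creds["uri"]))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
channel.basic_publish(exchange="", routing_key="orders", body=b"order-123")
connection.close()
```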

[Read more…]

The Fastest Way to Redis on Pivotal Cloud Foundry

Cross posted from The Pivotal Blog.

What do developers choose when they need a fast performing datastore with a flexible data model? Hands-down, they choose Redis.

But, waiting for a Redis instance to be set up is not a favorite activity for many developers. This is why on-demand services for Redis have become popular. Developers can start building their applications with Redis right away. There is no fiddling around with installing, configuring, and operating the service.

Redis for Pivotal Cloud Foundry offers dedicated and pre-provisioned service plans for Cloud Foundry developers, and these plans work in any cloud. The plans are tailored for typical patterns such as application caching and in-memory datastores, and they cover the most common requirements for developers creating net-new applications or replatforming existing Redis applications.
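
As an illustration of the application-caching pattern, here is a minimal cache-aside sketch using redis-py. The host, port, and expensive_lookup function are placeholders; a bound Redis for PCF instance would supply real connection credentials.

```python
# Cache-aside sketch: check Redis first, fall back to the slow source,
# then cache the result with a TTL. Connection details are placeholders.
import redis  # pip install redis

cache = redis.Redis(host="localhost", port=6379)

def expensive_lookup(user_id: str) -> str:
    return f"profile-for-{user_id}"  # stand-in for a slow backend call

def get_profile(user_id: str) -> str:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")
    value = expensive_lookup(user_id)
    cache.setex(key, 300, value)  # expire after 5 minutes
    return value

print(get_profile("42"))
```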

We’d like to invite you to a webinar discussing different ways to use Redis in cloud-native applications. We’ll cover:

  • Use cases and requirements for developers
  • Alternative ways to access and manage Redis in the cloud
  • Features and roadmap of Redis for Pivotal Cloud Foundry
  • Quick demo

Presenters: Greg Chase, Director of Products, Pivotal and Craig Olrich, Platform Architect, Pivotal

Found at CF Summit: The Secret Pipelines of Agile Cloud Operators

Cross posted from my LinkedIn articles.

Last week, more than 1400 developers and IT operators converged on Santa Clara to attend the 2017 Cloud Foundry Summit. They came to learn about the latest advances in the Cloud Foundry platform and approaches for helping development teams to deliver software faster.

I was following a deeper thread: how can we help operators of Cloud Foundry to become more agile themselves in delivering the CF platform?

Like any platform, the steps for installing or updating Cloud Foundry involve a long, serial set of tasks. For example, you set up and deploy infrastructure, then install and configure official releases of the software. Depending on the project, there may also be backups of prior state, data and app migration, and a battery of smoke tests and regression tests to run. Then comes the move to production: rebinding applications, cutting over to new versions of servers, and so on.

For many customers running Cloud Foundry at scale, this process is repeated, with slight differences, for each of their several Cloud Foundry deployments.

Take, for example, Verizon Wireless, which runs 12 foundations of Cloud Foundry hosting more than 100 apps and 4,000 containers.

[Read more…]

Orchestration Patterns for Microservices with Messaging by RabbitMQ

Companies looking to speed up their software development are adopting microservices architectures (MSA). Building applications as groups of smaller components with fewer dependencies helps companies such as Comcast, Capital One, Uber, and Netflix deliver more frequent releases and thus innovate faster.

An important consideration in adopting an MSA is deciding how individual services should communicate between each other. Adding a message queue such as RabbitMQ to handle inter-service messages can improve communication by:

  • Simplifying our services so they only need to know how to talk to the messenger service.
  • Abstracting communication by having the messenger service handle sophisticated orchestration patterns.
  • Scaling message throughput by increasing the cluster size of the messenger service.

In this webinar we’ll discuss:

  • Requirements for communicating between microservices
  • Typical messaging patterns in microservice architectures
  • Use cases where RabbitMQ shines
  • How to use the RabbitMQ service for Pivotal Cloud Foundry to deploy and run your applications

We’ll also demonstrate how to deploy RabbitMQ in Pivotal Cloud Foundry, and how to incorporate it in microservices-based applications.

Presenters: Greg Chase, Pivotal & Dan Baskette, Pivotal
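
As a taste of one pattern we’ll cover, here is a hedged Python sketch of a competing-consumers work queue using the pika client: several instances of a microservice share one queue, and RabbitMQ load-balances messages across them. The queue name and connection details are placeholders.

```python
# Competing-consumers sketch: each worker pulls one message at a time and
# acknowledges it only after successful processing. Names are placeholders.
import pika  # pip install pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_qos(prefetch_count=1)  # at most one unacked message per worker

def handle(ch, method, properties, body):
    print(f"processing {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after success

channel.basic_consume(queue="tasks", on_message_callback=handle)
channel.start_consuming()
```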

RabbitMQ: What’s New & Changing after 10 Years of Application Messaging?

Cross posted from The Pivotal Blog.

Ten years ago, RabbitMQ was first released as open source. Since then, RabbitMQ has grown to become the most widely deployed open source message broker.

Whether you’re familiar with RabbitMQ or just learning, you’ll want to tune into this webinar with Daniel Carwin, development manager for the RabbitMQ team. Learn about the latest capabilities of RabbitMQ, and hear the future vision of how it will evolve to meet tomorrow’s application needs.

We’ll cover the following:

  • Brief history of RabbitMQ
  • Core design principles and how they help today’s applications
  • Common use cases and patterns
  • Latest features of RabbitMQ: messaging, language support, distributed deployment, enterprise and cloud support, and management and monitoring
  • Future vision and roadmap for RabbitMQ
  • Your questions answered

Presenters: Daniel Carwin, RabbitMQ Development Lead, Pivotal & Greg Chase, Pivotal

Watch the webinar: https://content.pivotal.io/webinars/rabbitmq-whats-new-and-changing-after-10-years-of-application-messaging?utm_source=pivotal-brighttalk&utm_medium=webinar-link&utm_campaign=10-years-of-rabbitmq_q117

Apache Geode Graduates to Top Level Project in Apache; Up Next: Microservices

Cross posted from The Pivotal Blog.

Just eighteen months ago, Pivotal granted over 1 million lines of code from the Pivotal GemFire code base to The Apache Software Foundation (ASF) to help create the Apache Geode project. We made this decision because we saw many of our enterprise customers gravitating towards open source software-based solutions as part of larger IT modernization efforts. These customers understand that products based on open source projects often evolve faster than their closed-source counterparts, provide more roadmap and release transparency, and reduce the risk of vendor lock-in. We couldn’t agree more.

Apache Geode began as a podling in the Apache Incubator project. Since then, an active community of contributors has grown around the project, helping improve and add capabilities to the in-memory data grid. Given the role in-memory data grids and event-driven architectures play in building cloud-native applications, the interest in Apache Geode is no surprise.

It’s been an active and fruitful year for Apache Geode. The first Apache Geode Summit was held in March 2016 and drew close to 100 community members from enterprises including Bloomberg, Southwest Airlines, Murex and TEKsystems, among others. In October 2016, Apache Geode 1.0 was released, a major milestone for any open source project. And now the Apache Geode community has reached another important milestone: Pivotal is thrilled that just this week the ASF graduated Apache Geode to a Top-Level Project (TLP), “signifying that the project’s community and products have been well-governed under the ASF’s meritocratic process and principles.” We extend hearty congratulations to the Apache Geode community, without whom TLP graduation couldn’t have been achieved. This truly was a community effort, representing the best of open source and the “Apache Way”!

Of course, Apache Geode’s graduation to TLP is just one step in a longer journey to help enterprises across industries support mission-critical applications with a modern, open-source-based, in-memory data grid. This includes supporting an event-driven and microservices-based approach to application architecture. Microservices are autonomous across network boundaries, and inter-microservice communication is best handled through an event-driven approach – a sweet spot for Geode.

Geode and Microservices

Over the last few years, a microservices-based approach has emerged as the ideal way to build web-scale applications, and more and more enterprises are looking to adopt this pattern for their custom software. The benefits of this approach are compelling, given the availability, scale, and speed that companies like Netflix have achieved.

Here at Pivotal, we’ve been at the forefront of the rise of microservices, with Spring Boot as the de facto Java framework, Spring Cloud Services based on Netflix OSS, and Pivotal Cloud Foundry for continuous integration and delivery. So, how does Geode fit in with this movement?

It turns out Geode is a perfect complement to a microservices architecture, thanks to joint development work from the Spring and GemFire teams, the results of which have been extended into Spring Data Geode. Breaking applications down into specific, autonomous services that can be developed and scaled independently of one another has implications for how data is managed, including how data is communicated and shared between different microservices.

From an architectural perspective, an event-based architecture is ideally suited for interactions between these loosely coupled microservices. Fortunately, Geode was designed for event-based architectures and naturally supports a microservices application development approach. The Geode Continuous Query feature, for example, can be used within each microservice to specify the state changes that microservice wants to subscribe to. The events of interest can be easily specified using OQL; for example, a fulfillment microservice might register the continuous query SELECT * FROM /orders o WHERE o.status = 'READY_TO_SHIP' (region and field names here are illustrative) to be notified as matching orders appear. This convergence of features in Geode that fulfill the requirements of microservices is no accident – it is part of Geode’s design. The blazing fast performance of in-memory computing, coupled with an event-based architecture, makes Geode an attractive choice for supporting microservices-based applications.

Geode also benefits from over ten years of development (as GemFire), maturing within numerous enterprises, and it now advances through community-based, meritocratic development. The seemingly daunting data challenges of microservices can be met without compromising their benefits.

Again, a big congratulations to the Apache Geode community for achieving this historic milestone! We at Pivotal look forward to continuing to work closely with the community and Pivotal customers to move the project forward. The sky’s the limit for Apache Geode! Interested in learning more about Apache Geode? Check out this five minute Geode tutorial, ask questions on the Geode mailing lists or on StackOverflow, or download Geode here.