Much of What Kids Need to Know to Get Hired Today Is Not Taught in School

If you're in school, or a parent with kids in college, take note. This is my day in high-tech hiring:
“This candidate for an entry-level engineering position is not a contributor to any open source software projects.”

“This candidate for an entry-level marketing job doesn’t know the APIs of online services like StackExchange, Reddit, and GitHub.”

This is the standard for getting hired today. They don’t teach these skills in school. The good news is that you don’t need to be in school to learn them.

Pivotal Releases GemFire 8.1 With Updates and New Features

Cross posted from The Pivotal POV Blog…


Pivotal GemFire 8 was the first major release of the in-memory distributed database since it joined Pivotal’s portfolio of products. Today, we’re announcing the release of Pivotal GemFire 8.1. Part of the Pivotal Big Data Suite, Pivotal GemFire enables developers to deploy their big data NoSQL apps at massive scale. In addition to incremental product improvements, 8.1 enhances GemFire’s availability and resilience within a distributed system and improves its management and monitoring features.

Allowing High Availability, Resilience, and Global Scale

[Read more…]

GemFire XD 1.4 Now Available for Download

Cross posted from The Pivotal POV Blog…


The latest release of GemFire XD, version 1.4, is now available for download. Its biggest improvements include single-hop inserts, for 50% faster performance, and support for JSON document objects in SQL tables. This makes GemFire XD even better for write-intensive use cases, such as high-speed ingest. It also supports use cases that need more schema flexibility than the otherwise well-defined relational structure of GemFire XD allows.
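GemFire XD’s own syntax isn’t shown here, but the general idea of mixing a schema-flexible JSON document into an otherwise relational table can be sketched with SQLite’s JSON1 functions as an analogy (the table name, fields, and `json_extract` call below are SQLite illustrations, not GemFire XD API):

```python
import sqlite3

# In-memory database standing in for a SQL store that accepts JSON documents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Each row pairs a fixed relational key with a schema-flexible JSON document.
conn.execute(
    "INSERT INTO events (id, payload) VALUES (?, ?)",
    (1, '{"type": "ingest", "source": "sensor-7", "batch": 42}'),
)

# Query inside the document without having declared its fields up front.
row = conn.execute(
    "SELECT json_extract(payload, '$.source') FROM events WHERE id = 1"
).fetchone()
print(row[0])  # sensor-7
```

The win for high-speed ingest is that new document fields can appear at any time without an ALTER TABLE, while the relational columns keep their usual indexing and query semantics.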

[Read more…]

10 Amazing Things to Do With a Hadoop-Based Data Lake

Cross posted from The Pivotal POV Blog.

The following is a summary of a talk I gave at Strata NY that is proving popular with people still trying to understand use cases for Apache Hadoop® and big data. In the talk, I introduce the concept of a Big Data Lake, which combines Apache Hadoop® storage with powerful open source and Pivotal technologies. Here are 10 amazing things companies can do with such a big data lake, ordered by increasing impact on the business.


[Read more…]

Why is Uber’s Surge Pricing Such a Big Deal?

For all the flak that Uber rightfully deserves on many recent issues, I don’t know why people get bent out of shape about surge pricing. The point is to incentivize more drivers to work by paying higher-than-normal rates for trips. These drivers do not work set hours; it’s their choice when to work.

In many cases, this could be better than overtime pay.

The other choice is a set price, but no drivers. Ever notice you can never get a taxi at rush hour in NYC?

TEDx Talk: “What’s the Big Deal about Big Data for Humans?”

I gave this TEDx talk to a general audience of folks at the SJSU TEDx event. It’s an interesting job trying to explain fairly complex technical concepts to a mixed audience of people of many different ages and backgrounds. It helps one realize just how much we depend on a common vocabulary and understanding in the IT industry.

And in the tradition of many TED talks, the point is to motivate an action, not just educate.

How do you think I did?

And here are the slides…

Announcing the New Version of GemFire XD and SQLFire: Pivotal GemFire XD 1.3

Cross posted from my blog at Pivotal POV…


The newest versions of SQLFire and GemFire XD are one and the same: Pivotal GemFire XD version 1.3. What were previously two separate products are now merged, so current licensees of either product are entitled to upgrade to the new version.

[Read more…]

Do you need a college education to have an IT career?

Some very good advice from a former colleague I have a lot of respect for.

Vijay's thoughts on all things big and small

One of the most common questions I get from my younger colleagues and mentees is about the value of a college education in pursuing an IT career.

In my early 20s, I had one big regret: I did not go to a big-name US college to get a degree. My engineering and MBA degrees were from the University of Kerala in India. A lot of my friends did take their degrees, sometimes even a second degree, from reputed US schools. I had some kind of an inferiority complex about that when I started out, but I got over it soon enough.

Education in India is not as expensive as it is in the US. Four years of engineering college and two years of MBA together cost about $5K, including food, travel, hostel and so on. My parents picked…


What’s New in Pivotal GemFire 8

Reposted from Pivotal POV….


On September 23, 2014, Pivotal announced the release of Pivotal GemFire 8, part of the Pivotal Big Data Suite. This is the first major release of GemFire since it became part of the Pivotal portfolio.

Born from the experience of working with over 3,000 of the largest in-memory data grid projects out there, including China Railways, GIRE, and Southwest Airlines, this release invests more in the needs of the most demanding enterprises: more scale, more resilience, and more developer APIs.

This release is a significant enhancement for developers looking to take their big data NoSQL apps to massive scale. For the complete technical details, check out the new datasheet and official product documentation.

Here’s what’s new, organized by the five areas where GemFire leads the industry:

Providing Scale-Out Performance

This is why most of Pivotal’s customers begin looking at GemFire in the first place—because they can’t make traditional RDBMSs scale to the number of concurrent transactions and the volume of data they need to manage.

Pivotal GemFire manages data in-memory, distributed across multiple systems on commodity hardware—hundreds of nodes if you like—in a shared-nothing architecture. So there’s plenty of compute and memory to host all your data with real-time response.

WHAT’S NEW

We’ve added in-memory compression, effectively giving each node the capacity to hold up to 50% more data. Compression is achieved through Snappy, a speed-optimized algorithm, and the compression codec is pluggable, so you can substitute whatever algorithm you prefer.
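As a sketch of what enabling this looks like, a region can declare its compressor in cache.xml; the class name below follows GemFire 8’s documented packaging, and the region name is just an illustration, so verify both against your release’s documentation:

```xml
<region name="exampleRegion">
  <region-attributes>
    <compressor>
      <class-name>com.gemstone.gemfire.compression.SnappyCompressor</class-name>
    </compressor>
  </region-attributes>
</region>
```

Swapping in your own codec means providing a class that implements GemFire’s compressor interface and referencing that class name here instead.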

Maintaining Consistent Database Operations Across Globally Distributed Nodes

[Read more…]

Our Customers at Pivotal Recognize the Importance of Bridging Traditional Data Warehousing into a Next-Generation Platform

Cross posted from my blog at Pivotal POV:

Recently Gartner published the report, “Gartner Critical Capabilities for Data Warehouse Database Management Systems” that shares survey results of customers from a variety of Data Warehouse solution vendors.  The report ranks vendors in 4 categories of use cases in the Data Warehouse market: “Traditional Data Warehouse”, “Operational Data Warehouse”, “Logical Data Warehouse”, and “Context Independent Data Warehouse.”

Based on existing customer implementations and their experiences with data warehouse DBMS products, the report scored Pivotal in the top 2 out of 16 vendors in two use cases: “Traditional Data Warehouse” and “Logical Data Warehouse”.  In a third use case, “Context Independent Data Warehouse”, Pivotal scored in the top 3 relative to the 15 other vendors.

In the report, Gartner writes “the adoption rate for modern use cases (such as the logical data warehouse and the context independent warehouse) is increasing year over year by more than 50%—but the net percentage for the context independent and logical data warehouse combined remains below 8% of the total market.”

Modern Data Warehouse Use Cases Generate Trillions in Value

Many of Pivotal’s big data analytics customers started out as Greenplum Database customers. These customers are well established in traditional data warehousing techniques and also take advantage of the modern data warehousing scenarios supported by Greenplum Database’s advanced analytics capabilities and the other products of Pivotal Big Data Suite: Pivotal HAWQ and Pivotal HD.

Industry leaders like General Electric are using Pivotal Big Data Suite to create new solutions that cut weeks of analysis time that would be required using traditional data warehouse approaches. For example, a process for refining insightful analytics from sensor data streams generated by industrial machinery was compressed from 30 days to just 20 minutes.

Other companies are using these approaches to improve customer retention, target advertising, detect anomalies, improve asset utilization and more. The combined potential benefit of these opportunities is staggering. GE alone predicts its solutions will boost GDP by $10-15 trillion in the next 20 years by saving labor costs and improving energy efficiency. [Read more…]