Posts Tagged ‘targeted data lists’

Data Brokers and the Driving Force behind the Data Economy



With all the talk of data out there, who is actually using it, and what are they using? It turns out that most data used in the enterprise today comes from internal applications. That is starting to change, and the trend will accelerate. But for now, when asked which data types were important to their firm’s overall business strategy, the majority of business intelligence users and planners cited internal sources such as transactional data from corporate apps or other customer data. Only about one-third of respondents cited the importance of external sources such as scientific data, partner data, or other third-party data.

Fewer still used unstructured external data such as Twitter feeds or other social media sources. Current data sources are limited. Yet both business and IT decision-makers recognize the need to improve their use of data: 56% of business and IT decision-makers surveyed by Forrester see better use of data and analytics to improve business decisions and outcomes as a top priority. And that potentially includes expanding the use of external data… if they can find it.

Where do they go for external data? What types of data might complement their transactional and other internal data? How can corporate strategists and market research teams identify new sources of information? Where can they find them, and how can they acquire and consume them? Can they be combined with internal data? Are the sources safe? Reliable? Sustainable?

We’re kicking off research here at Forrester that looks at data providers and enablers in the new data economy. Some are veterans of the data industry like LexisNexis or Dun & Bradstreet. But others are new players who have entered the market to help facilitate the exchange and use of data.

For example, Enigma is a new search and discovery platform for public data – note they do not say “open data” but rather any data that is available to the public, whether enterprise data, scientific data, academic data or open government data. They describe the “public data paradox,” in which data is out there and available but not accessible. Public data remains in diverse formats – although that will slowly change with new government mandates for API access – and is not yet indexed and searchable. As they put it, “you are limited to the data you know about,” and you can’t see the connections among different data sets. Enigma has built an infrastructure for acquiring, indexing and searching public data. It makes it easy to find data on a particular subject and, most importantly, to find data sets you didn’t even know about. As they say, “a lot of people have been pioneering how we analyze the world with data,” but what was missing was how to find that data. They want to be the “Google” for public data.
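To make the indexing-and-search idea concrete, here is a minimal, purely illustrative sketch of keyword search over a small catalogue of data set descriptions. The catalogue entries are invented, and this is not a description of Enigma’s actual architecture.

```python
from collections import defaultdict

# A toy catalogue of public data sets. Names and descriptions are invented
# for illustration; a real catalogue would hold thousands of entries.
CATALOGUE = {
    "city-building-permits": "Building permits issued by a city planning department",
    "federal-contract-awards": "Contract awards published by a national government",
    "university-research-grants": "Research grants reported by academic institutions",
}

def build_index(catalogue):
    """Map each lowercased keyword to the data sets whose description mentions it."""
    index = defaultdict(set)
    for name, description in catalogue.items():
        for word in description.lower().split():
            index[word].add(name)
    return index

def search(index, query):
    """Return the data sets whose descriptions contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

index = build_index(CATALOGUE)
print(search(index, "contract awards"))  # {'federal-contract-awards'}
```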

Another example, on my side of the pond, is Data Publica. Like Enigma, they help businesses acquire data, yet with less emphasis on self-service or a “Google” approach. Data Publica will help identify data sources, will extract and transform the raw data into a usable structure, and will deliver data as a service. Delivery mechanisms range from dashboards and reports to data sets or streams, either as a one-off purchase or through subscription access. Customer solutions include regional dashboards used to assess market opportunity and RFP notifications alerting potential bidders to a new request. Small businesses that might not have the internal resources to monitor the websites of multiple government agencies every day benefit from the RFP aggregation Data Publica provides, which pulls data on public RFPs from over 100 sources. Getting a timely alert that an RFP has been issued can be the key to winning the bid.
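As a rough sketch of what that kind of aggregation involves, the snippet below polls a handful of feeds and flags entries whose titles mention chosen keywords. The feed URLs and keywords are placeholders, and this is not Data Publica’s actual pipeline; real agency sites often require scraping and normalization rather than clean RSS feeds.

```python
import feedparser  # third-party library: pip install feedparser

# Placeholder feed URLs; real agencies publish tenders in many formats.
FEEDS = [
    "https://example.gov/procurement/rss",
    "https://example.org/tenders/atom",
]
KEYWORDS = {"software", "consulting", "data"}

def fetch_matching_rfps(feeds, keywords):
    """Collect feed entries whose titles mention any of the given keywords."""
    matches = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            title = entry.get("title", "")
            if any(keyword in title.lower() for keyword in keywords):
                matches.append({"title": title, "link": entry.get("link", "")})
    return matches

for rfp in fetch_matching_rfps(FEEDS, KEYWORDS):
    print(f"{rfp['title']} -> {rfp['link']}")
```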

Data Publica also assists the other side of the equation, the producers and owners of public data. For the City of Nantes, Data Publica provided guidance in the launch of its open data initiatives, advising on data structure as well as on delivering data through both visualizations and APIs. Public organizations may produce data, but they face the perennial challenge of bringing it to market.

Given the skills shortages and the difficulty of recruiting data expertise, enablers like Data Publica and Enigma are key to the development of the data economy. Stay tuned for more Forrester research on these enablers of the data economy.

Posted by Jennifer Belissent, Ph.D.


3 Reasons Why “Big Data” Isn’t Really All THAT Big



Over the last couple of years, Big Data has been unavoidable. It’s not just big, it’s massive. If you throw a stone down the streets of London or New York, you’ve got as much chance of hitting a big data guru as a social media guru.

Undoubtedly, there is great power in data, but is Big Data all it’s cracked up to be?

50% of my brain thinks Big Data is great, and 50% of me thinks it’s a neologism. I’ve found it difficult to reconcile all of the varying information out there about it.

So join me for the first part of a two-part series looking at Big Data. In part one, I’ll look at three reasons why Big Data is a big load of baloney. And next week in part two, I’ll look at three reasons why Big Data is awesome.

1. Big trends are trendy

My pet rock still hasn’t moved, and my Tickle-Me-Elmo still won’t shut up. And also, Big Data is big, at least according to Google Trends:

[Google Trends chart: search interest in “big data” over time]

Some other terms once synonymous with the inter-web were pretty trendy too. Remember this one?

[Google Trends chart: search interest in “web 2.0” over time]

The adoption curve of the term “web 2.0” looks quite similar to where we are now with Big Data. And yet, if you still use the term “web 2.0” in your job, then you probably think the Fresh Prince still lives in West Philadelphia. (He doesn’t.)

The thing about Big Data is that it really isn’t anything new. Cluster analyses, propensity modelling, neural networks and the like have been in use in the marketing sphere for quite some time.
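As a reminder of how ordinary this tooling is, here is a minimal sketch of a propensity-to-convert model fitted with scikit-learn on made-up campaign data; the feature names and figures are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented campaign data: email opens, clicks and past purchases per customer.
n = 500
opens = rng.poisson(3, n)
clicks = rng.poisson(1, n)
purchases = rng.poisson(2, n)
X = np.column_stack([opens, clicks, purchases])

# Synthetic conversion outcome, loosely driven by clicks and past purchases.
logit = -2.0 + 0.8 * clicks + 0.5 * purchases
converted = rng.random(n) < 1 / (1 + np.exp(-logit))

# A plain logistic regression is the classic propensity-to-convert model.
model = LogisticRegression().fit(X, converted)
print(dict(zip(["opens", "clicks", "purchases"], model.coef_[0].round(2))))
```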

The phrase used a few years ago for this sort of stuff was ‘business intelligence’:

[Google Trends chart: search interest in “business intelligence” over time]

But now, we don’t care about business intelligence anymore. Who needs intelligence? It’s over-rated. Like Goethe said, “All intelligent thoughts have already been thought”.

And yet, Big Data is everywhere. Why shouldn’t it be? It’s BIG. However, if you ask 10 people what Big Data means, you’ll get 10 answers, none of which make much sense.

Maybe it’s because of this:


We’ve all seen Moneyball and read Nate Silver’s blog. There are people out there who are better at statistics than you. And this is scary.

So what’s the solution? Throw a bunch of money at Big Data, whatever it is, and sleep soundly knowing that you’ve gainfully employed a math graduate.

And therefore, Big Data is a big load of baloney.

2. Missing one V

Gartner defines Big Data as requiring three Vs: Volume, Velocity, and Variety. So let’s look at each of these a bit more closely.

Volume of data: for sure, there’s loads of data out there. Huge amounts. Check.

Velocity of data: yep, data is moved around in large quantities faster than ever before. Check.

Variety of data: in most digital marketing ecosystems, there are the following types of data (yes, I know there are more, but for the sake of argument bear with me):

  • Site stats.
  • Email engagement stats.
  • Mobile/SMS stats.
  • Past purchases.
  • Demographics, preferences etc.

And within each of these, the options are finite. For example, in email, most people measure (at the very least) opens, clicks and conversions. That’s three types of data.

And for all of the other areas above it’s the same. For the sake of argument, let’s say that we’ve got 30 types of data in total.

This is the thing: 30 types of structured data. Processing this data doesn’t require a supercomputer; it simply requires robust statistical methodology.

So, if you’re a digital marketer, what you actually have is ‘a few sets of structured, small data’, not ‘Big Data’.
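For a sense of scale, a table with a few dozen structured columns is comfortably “small data”. The sketch below, with invented column names and synthetic values, summarises 30 metrics for 100,000 customers using nothing more exotic than pandas.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# 100,000 customers by 30 structured metrics: "small data" on any modern laptop.
n_rows, n_cols = 100_000, 30
columns = [f"metric_{i:02d}" for i in range(n_cols)]
df = pd.DataFrame(rng.normal(size=(n_rows, n_cols)), columns=columns)

print(df.shape)                             # (100000, 30)
print(df.describe().loc[["mean", "std"]])   # summary statistics in well under a second
print(df.corr().round(2).iloc[:3, :3])      # pairwise correlations, no cluster required
```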

And therefore, Big Data is a big load of baloney.

3. You can perfectly predict the past

With the National Hockey League’s 2013-14 season fast approaching, I’ve been spending a lot of time lately trying to determine the best bets to place on the eventual winner.

And of course, it seems Big Data is the best route to my next million dollars. (Btw if anyone is interested in joining my hockey pool then drop me a line – go-live is 1st October!)

I downloaded as many team statistics as I could from last season and loaded them into a spreadsheet. They ranged from rudimentary statistics such as Goals For and Goals Against right through to Winning % when trailing after two periods, Corsi at 5v5, and defensive zone exit rate.

Then I ran a multiple regression and removed non-causal variables. I perfected the model such that the formula spat out expected point totals that were on average within 0.5 points of the actual result.

When I plugged in the raw data from the previous season, the expected results it produced weren’t even close to the actual results.

This is a perfect case of what is called ‘over-fitting’.

When you have a lot of data, the urge is to use all of it and create an uber-complex, bullet-proof formula. Take all of your data points and find the trendline that touches everything. But there’s an inherent problem with this – all you’ve done is create a formula to perfectly predict the past.

The risks that come with an over-fitted model are twofold:

  1. You are assuming that the future will be the same as the past.
  2. Adding or removing variables becomes extremely difficult and risky.
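Here is a hedged sketch of the same trap using synthetic team statistics rather than real NHL data: with 30 teams and 25 candidate predictors, ordinary least squares can fit the season it was trained on almost perfectly while doing far worse on a season it has never seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

def make_season(n_teams=30, n_stats=25):
    """Synthetic season: points really depend on only two of the 25 stats."""
    stats = rng.normal(size=(n_teams, n_stats))
    points = 92 + 8 * stats[:, 0] + 5 * stats[:, 1] + rng.normal(scale=6, size=n_teams)
    return stats, points

X_last, y_last = make_season()   # the season the model is built on
X_prev, y_prev = make_season()   # a different season, used as a reality check

# Kitchen-sink model: throw every available stat into the regression.
model = LinearRegression().fit(X_last, y_last)
print("R^2 on the season it was fit to:", round(model.score(X_last, y_last), 3))
print("R^2 on a season it never saw:  ", round(model.score(X_prev, y_prev), 3))
```

With only a handful of residual degrees of freedom, the in-sample fit comes out nearly perfect while the out-of-sample score is typically far lower, which is exactly the “perfectly predicting the past” problem.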

So despite there being lots of data out there, the dominant strategy is to focus on the causal variables. In the hockey example above, while I won’t reveal my secrets, two of the stronger predictors of eventual success are goal differential and shot differential.

Not rocket science, I know – if you take more shots than your opponents you’ll generally score more goals than your opponents. However, I did learn to remove strictly correlative variables (such as Faceoff Win %, PDO and punches thrown).

Instead of focusing on Big Data and its billions of variables, I’m focusing on a small number of variables that actually matter.

Within your organisation, what are your causal variables? By looking at all the Big Data available to you, you run the risk of the truly valuable signals being obfuscated by irrelevant correlates.

And therefore, Big Data is a big load of baloney.

Disagree?

I do too. Well, 50% of me does. Feel free to elaborate on your point of view in the comments section below.

Parry Malm is Account Director at Adestra and a guest blogger on Econsultancy. Connect with him on LinkedIn or Google+.



USPS Makes Simplified Address Direct Mail Trial Permanent


The US Postal Service is asking regulators to allow its simplified direct marketing service for small businesses to become a permanent offering.

The Every Door Direct Mail service has proved successful since the start of trials last year, USPS told the Postal Regulatory Commission as it filed a request to add the programme to its portfolio of market-dominant products.

EDDM allows small businesses to use Standard Mail to send out advertising materials to every residential address on a carrier route, sending out up to 5,000 mailpieces at a time without requiring a mailing permit.

The key to the success of the saturation mail service is the ease with which an SME can use an online tool to select the carrier route or routes in which to distribute marketing materials. Items are then dropped off at the customer’s local post office.

Along with simplified rules, EDDM is seen as an important way to bring on board small businesses that have not used mail as a marketing channel before because they lack staff with specific direct marketing skills.

USPS has also been trialling a larger-scale version of EDDM for larger mailers dropping mail at business mail entry units, but this week said it wants to add the retail version of the programme to its Mail Classification Schedule.

The retail programme has brought in $43m in revenues since trials began at the end of March 2011, the Postal Service said this week, while revenues since the start of April – when USPS launched a major advertising campaign surrounding the service – have already reached $38m.

Up to June, more than 32,000 small businesses had signed up to participate in the programme, while there have been more than 105,000 transactions at post offices.

USPS believes the programme will reach the $50m limit on revenue from a trial service by September.

“The market test has already demonstrated that sending advertising mail to every address within a community, with fewer rules, rates, and regulations, is a popular way to connect to potential and actual local customers,” the USPS told regulators.

Executives have said that, including the bulk mail version of the service, they want to see Every Door Direct Mail become a billion-dollar revenue generator.

Rates to rise

The EDDM retail programme is currently priced at the Standard Mail saturation rate, but when it becomes a permanent fixture, USPS wants to set a rate of 16 cents per piece, about 10% more than the standard saturation rate.

The new price would set the retail version of the programme as more expensive than the version for larger mailers.

So far the trial programme has proved quite profitable for USPS, with its regulatory filing suggesting that attributable costs for the service have been just under 8 cents per piece.
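Taken at face value, those figures imply a contribution of roughly 8 cents per piece above attributable cost at the proposed rate, and a current standard saturation rate of roughly 14.5 cents per piece (16 cents being about 10% higher).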

USPS said the higher rate proposed for the retail product was justified because of the added convenience for its customers of being able to drop off items at post offices and avoid paying a permit fee.

Source: Post&Parcel/PRC

www.apexdm.net