Red Olive Blog

Thinking like Google: spotting patterns in your data, and learning what to ignore

Published: Jul 3rd, 2017 | Author: Jefferson Lynch

Gathering data has never been quicker, nor storing it cheaper. The tricky part is understanding what it can really tell you: identifying what’s of use, and what you can safely ignore.

We always advise our clients to think of their business needs first, and only then start digging into the data they have.

Using data to innovate your industry

Today, successful businesses turn to data to challenge preconceptions and innovate. It’s an approach that Google has taken very successfully when considering entering a new market. It hires experts to help it understand the industry’s received wisdom and limiting beliefs, then searches for patterns in large sets of data that either confirm or disprove them. Where a belief is disproved, it launches a product to answer the unmet need and profits from the result.

The approach works just as well with physical products.

Take the ice cream industry, which long believed that ice cream products that included good quality chocolate were just too difficult to manufacture; some of you are probably old enough to remember the choc ice, which dominated the British ice cream market for years despite its thin, tasteless “chocolate flavoured” coating.

In the absence of sufficient data to prove that presumption wrong, there remained a limiting belief that it was just too hard to work with good quality chocolate. Enter a market innovator: Mars confectionery.

Mars knew how to work with chocolate, but didn’t have a history of selling ice cream. When it entered the ice cream market by launching the Dove bar, a delicious high quality ice cream on a stick with a thick chocolate shell, it took the market by storm and reaped enormous rewards. Mars had studied its data – consumer demands for better chocolate – and ignored the industry preconceptions in order to open up a new market.

When data can save the day

Like Mars, our most clued-up clients act on what they consider ‘commercially interesting’ opportunities. These are the ones for which data indicates the potential to make a return, at the right level, within an acceptable timeframe. Identifying them requires two things: a clear understanding of their company’s purpose, and the ability to trust the data dispassionately, logically and without preconceptions.

We advise our clients to decide what they want to find or prove before digging around in their data. They should ask whether accepted limits still apply to their industry and use the data to deliver an answer. The skill is in finding repeat patterns.

This is where Red Olive can help: first by identifying the questions your organisation ought to be asking, then by sourcing and shaping the data to fit. Reducing the noise helps us to shorten the task, and understanding the limits of particular statistical techniques helps us to advise on what the data’s really saying.

That delivers a cleaner result and accurately models likely outcomes in time for our clients to implement ahead of their competitors. We can also advise where aggregate rates of change, inferred by broadening the data timeframe, indicate that avoiding a certain course of action would actually be more profitable.

Business first, data second

By identifying a business need first, rather than starting with your data and wondering how you can use it, data scientists and organisations like Red Olive can help you to more quickly apply existing methods, as they’ll already have encountered many of the opportunities you’re considering.

There’s a higher likelihood that we’ll be able to advise on the most effective algorithms to draw out the maximum value from the data for your specific business – and help you identify which parts can be ignored.

If a company’s stated need is “make some money from this data”, we always advise it to take a step back, consider what its core business activity is, and how the data could help improve or build on it. Only when it has identified this should it start to ask how the data can help.

Are you maximising the potential of your data? Call us on 01256 831100 to discuss your business needs, or email enquiries@red-olive.co.uk. We are experts in identifying the value in gathered metrics for businesses of all sizes.


Red Olive’s use of Google BigQuery saves time and money

Published: May 24th, 2017 | Author: Mark Fulgoni

When Google launched its big data service, BigQuery, the company had three simple goals: make it cheaper, faster and easier to use than the competition. These are lofty ambitions, but ones that we’ve found to hold up under scrutiny and real workloads.

A key benefit of BigQuery is that it is a completely hosted service. You pay small amounts of money for data storage and then for the processing time used, with Google devoting the necessary resources automatically. It’s a big shift from the old way of working, where you paid a fixed amount for virtual nodes, specified to match the maximum workload.

Huge time savings

Google further pushes its price advantage with incredible performance. Effectively, as you’re paying for processing time only, the faster you can complete a job, the cheaper it is.

When Ocado switched from using Apache Hive (a data warehouse infrastructure built on top of Hadoop), it found BigQuery to be 80 times faster.

Red Olive has seen similar performance gains with its clients, too. Mark Fulgoni, Principal Consultant at Red Olive, says: “One of our clients, a major media publisher, had to wait about three hours to process a day’s worth of data. Since switching to Google BigQuery, it has seen a dramatic drop in processing time, down to under 15 minutes.”

To add to this, BigQuery can now use the full ANSI SQL instruction set, letting you query your data in a familiar way. This is incredibly important as SQL-trained workers will be familiar with the BigQuery environment, so they can easily query large datasets and pull out meaningful information.
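
To make that concrete, here’s a minimal sketch of running a standard SQL query through Google’s Python client library, google-cloud-bigquery. The dataset and table are hypothetical, and you’d need a Google Cloud project with credentials configured; treat it as an illustration rather than a recipe.

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up your project and credentials

    # Plain, familiar SQL; `mydataset.daily_events` is a made-up table.
    sql = """
        SELECT channel, COUNT(*) AS events
        FROM `mydataset.daily_events`
        WHERE event_date = '2017-05-01'
        GROUP BY channel
        ORDER BY events DESC
    """

    job = client.query(sql)      # Google allocates the resources automatically
    for row in job.result():     # blocks until the job completes
        print(row["channel"], row["events"])

    # You pay for the data processed, so this figure is worth keeping an eye on.
    print("Bytes processed:", job.total_bytes_processed)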

Moving to BigQuery

As a tool, BigQuery is incredibly powerful, but shifting to it isn’t quite as simple as it may seem. With its database-as-a-service model, we’re moving into a new world where many people aren’t familiar with the best architecture for ETL (Extract, Transform, Load: the process of converting collected data into the right structure for processing). As a result, there’s a real danger that a BigQuery service could be over-specified to replicate existing ways of working, negating BigQuery’s potential cost benefit.

Jefferson Lynch, Client Director at Red Olive says: “We are one of the few UK companies with experience of working on BigQuery – now at 18 months and counting – and many years’ experience working with databases as a service.

“As a result, our experts are well placed to engineer the ideal architecture to make the most of Google’s service. By splitting ETL jobs into lots of little packages with no dependencies, you can pay small amounts of money for each BigQuery job as you need it. This will keep your BigQuery costs on the lowest tier possible while maximising performance. With our business experience of analysing big data, we can help format and query your data, to give you the insights that can transform your business.”
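
To illustrate the “lots of little packages with no dependencies” idea (this is a sketch, not Red Olive’s actual tooling): because the two transformations below don’t depend on each other’s output, each can be submitted as its own small, separately billed query job and run concurrently. The project, dataset and table names are invented.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical independent ETL steps: neither reads the other's output,
    # so they can run in parallel as separate jobs.
    transforms = {
        "stg_orders": "SELECT * FROM `raw.orders` WHERE event_date = '2017-05-01'",
        "stg_clicks": "SELECT * FROM `raw.clicks` WHERE event_date = '2017-05-01'",
    }

    jobs = []
    for table, sql in transforms.items():
        config = bigquery.QueryJobConfig(
            destination=f"my-project.staging.{table}",
            write_disposition="WRITE_TRUNCATE",  # makes re-runs idempotent
        )
        jobs.append(client.query(sql, job_config=config))  # returns immediately

    for job in jobs:
        job.result()  # wait for all the small jobs to finish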

To talk to us about how Google BigQuery can speed up big data transactions and reduce costs, call us on 01256 831100 or drop us an email at enquiries@red-olive.co.uk.


Agile BI, the death-knell of the traditional Data Warehouse…?

Published: Jun 10th, 2016 | Author: Mark Fulgoni

So, if you have been reading recently, you’ll know that I’ve been looking at Agile BI (see A journey from Football to Agile BI, part 1 and part 2). For those coming straight to this article, by this I mean the new breed of end-user tools, such as Microsoft’s Power BI, MicroStrategy 10, Tableau and QlikView, which allow business users to quickly pull together and cleanse unprepared data (known in the parlance of these tools as “data wrangling”) without the need for complex Extract, Transform and Load (ETL) code. All of this in addition to offering powerful visualisation, mapping and reporting capabilities.

Now that these tools have reached a reasonable level of maturity, it’s worth giving some thought to the impact they will have on existing, traditional Business and Management Information reporting.

Using this new breed of BI tool, business users can acquire, wrangle (conform, cleanse and transform), join and aggregate a very wide variety of data sources and formats, be they traditional, well-structured sources such as an Enterprise Data Warehouse, or web pages, RSS feeds, social media and unstructured big data.

Based on this you might reasonably draw the conclusion that business people no longer need a traditional Data Warehouse. They no longer need to wait for their IT, MI or BI department to go through lengthy analysis, design and development cycles in order to report on new areas of interest; they can “do it themselves” with these new tools, which require less technical or programming knowledge.

The counter-argument is that business people should not be using such tools because they lead to a wild-west culture in which everyone with a tool develops their own reports and tweaks them according to their own agenda; this is the complaint many IT functions have long had about reporting via Microsoft Excel. There would no longer be any agreement between departments about what figures mean, no “one version of the truth”, and lots of arguing about what this week’s sales figures really are, as every department calculates “sales” in the way that puts the best spin on it from their perspective.

But is there a better way…?

Well, the first point I would make is that we should think twice before throwing out all the accumulated wisdom about why we build a Data Warehouse in the first place. Many of those reasons still hold true for our frequently used and key data:

  • Cleansing and conforming: it’s ineffective and inefficient for everyone who wants to use a piece of data to have to apply cleansing and conforming rules every time they use it.

  • Consistency (“one version of the truth”): one of the driving reasons for building a centralised Data Warehouse is to ensure that key attributes and measures are agreed across an organisation, ensuring a consistent reporting language and avoiding disputes over the lineage of key business data.

Can, and should, the traditional Data Warehouse coexist with Agile BI?

The New World…

I think that it not only can, but that it very definitely should.

There is still a need to clearly define and strongly govern the key metrics which drive any business; these belong in a structured data repository where they are clearly defined, well understood and served up in a consistent manner.

However, if we look at Agile BI alongside the traditional Data Warehouse we can start to realise its true value. If governed well it can become a significant boon to both business users and the IT functions which support them.

Agile BI can be used as a reporting platform, sourcing data from a traditional Data Warehouse but also capable of supplementing it with data gleaned from other sources: internal data which hasn’t yet been prioritised for inclusion in the warehouse, external market data and so on. In this way, business users are able to deliver reporting to meet their own needs without waiting for long IT development cycles. IT should then monitor these additional data sources: some will prove to be experimental or short term, but many will be used regularly to fill an ongoing reporting need. In these instances, the data sources should be assessed and prioritised for inclusion in the centralised Data Warehouse.

Mark Fulgoni is a Principal Consultant in Red Olive’s Data Practice.

Want to learn more?

Do you want to learn more about Agile BI and Data Warehousing? Why not contact the Red Olive team? We’d be happy to chat with you about how we can help you with your aims. You can reach us here.


A journey from Football to Agile BI (part 2)

Published: Mar 18th, 2016 | Author: Mark Fulgoni

So, continuing on from last time, mapping the data was a bit more interesting. Initially I wasted some time on a silly assumption that I would need latitude and longitude details for all the football grounds; I should have realised that this new breed of tools would be more intelligent than that. I have a city field in my data, and it turns out that all I really needed to do was tell the Power BI model that this was a “city”. (The Model (data set) editor contains a whole set of geographical categories, such as city, country and postal code, that you can tag columns with to enable mapping your data.)

However, I quickly hit a little “gotcha”: Athens was appearing in North America. The online help enabled me to resolve this: creating a calculated column to concatenate the city and country together put Athens firmly back in Greece, the birthplace of modern democracy. Don’t you just love the US-centric view of the world…?
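
For anyone wanting to try the same fix, the calculated column is a one-liner in DAX, something along these lines (the table and column names are illustrative):

    Location = Results[City] & ", " & Results[Country]

Tagging the new column with one of the geographic data categories then gives Bing an unambiguous value to resolve.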

Next, I quickly realised that I needed to exclude home matches from the mapping, otherwise the stats were heavily biased towards London. It wasn’t hard to figure out how to apply a filter just to the map, rather than to all the graphs and tables. Even so, London is still the city with the most fixtures, but given the number of London-based teams in the top flight, and the fact that cup semi-finals and finals are most often played there, that’s no real surprise.

Mapping the data

What was more of a surprise was the top city when I filtered by European games:

[Report: map of fixtures, filtered to European games]

Thankfully I’d got Bing to resolve Athens in Greece, otherwise it would have looked like we had been on some serious transatlantic journeys!

Conclusions

So, other than having a bit of fun with two of my obsessions, football and data, what did I learn?

Well, Microsoft’s Power BI is quick and easy to use. DAX is nothing to be frightened of: if you know Excel you are most of the way there, and the online help is also pretty good.

It wasn’t hard to pull in data and join it together, nor was it difficult to add new data manually. However, Power BI doesn’t have some of the advanced data wrangling capabilities of other tools I’m currently investigating (for example, MicroStrategy 10).

I also found it easy to calculate additional measures, such as the win, loss and draw percentages, which need to be calculated on the fly at the appropriate totalling level.

And, all joking aside, it was fun!

Next time I’ll look at and discuss the implications of Agile BI for the traditional Data Warehouse.

Take a look for yourself: the link below should take you to a public version of my report. However, whilst I keep my own stats up to date, I can’t promise that I’ll keep publishing updates.

Mark’s Power BI Football Report

Mark Fulgoni is a Principal Consultant in Red Olive’s Data Practice.


A journey from Football to Agile BI

Published: Mar 11th, 2016 | Author: Mark Fulgoni

So let me set the scene. Like many children, I grew up visiting my grandparents at weekends. They owned a paper shop and, as luck would have it, this paper shop was in Finsbury Park, North London. Now in those days, whilst football players were stars, they also smoked ciggies and read newspapers, and so the players from the local team were patrons of my grandparents’ shop… and so began my life-long obsession. In time I grew up, got a job, left home, got married, all the usual stuff, but through it all my obsession remained.

As with my choice of football team, I got lucky when it came to work: I was leaving school just as the computer industry was burgeoning. I got a job as a trainee programmer, and that began another obsession: data. I became a geek!

I’ve forged myself a career working with data, and over time I came to realise that I’d also become a data hoarder. As I’m not very good at remembering statistics, I collect data; if I play a game I note down my progress and use it to plan getting to the next level, reaching the next target, etc. Well, I confessed to being a geek, didn’t I? Unsurprisingly, this means I have accumulated a record of all my football team’s results stretching back to the 1980/1 season.

So where does Agile Business Intelligence come into this, I hear you ask?

Until recently, mention “agile” in computing terms and you would have been talking about Agile Project Management, Kanban walls, burndown charts and scrum masters… These days, however, “agile” in relation to Business Intelligence means the ability to quickly wrangle (cleanse) and mash (join) together data without having to go through lengthy and formal Extract, Transform, Load (ETL) iterations. Now, my bread and butter is helping clients design and develop exactly the formal ETL that this new Agile BI says you can do without, so I thought I should see what the hype was all about.

Choices

First things first, I needed to select an Agile BI toolset. There are plenty out there (Tableau, QlikView etc.), but I plumped for Microsoft’s Power BI, mostly because I’m already reasonably familiar with Microsoft technologies. So I kicked off the free download and started thinking about what data to play with… Well, learning a new tool is always easier and more fun if you know the data you are using well, especially if it’s something that interests you… Oh, wait, I have all these years and years of football results, I wonder what they can tell me?

Wrangling the data

Being a self-confessed data hoarder and general geek, my dataset was already in pretty good shape: for each fixture I had the date, opposition team, competition (Premiership, FA Cup, Champions League etc.), goals for and against, and whether the game was played at home, away or at a neutral ground. With a little bit of hunting around on Google I was quickly able to add the city and country of all the various grounds, and so I set to work.

I needed to wrangle my data to annotate competitions in order to categorise each as “domestic” or “European” and I needed to work out a formula to calculate the football “season” for any given date, but neither of those proved a problem.
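
As a sketch of the season formula in DAX, assuming a Results table with a Date column and treating August as the month a new season starts (the real formula may differ in detail):

    Season =
    VAR FixtureYear = YEAR ( Results[Date] )
    RETURN
        IF (
            MONTH ( Results[Date] ) >= 8,
            FixtureYear & "/" & MOD ( FixtureYear + 1, 100 ),     // e.g. 15 Aug 1980 -> "1980/81"
            ( FixtureYear - 1 ) & "/" & MOD ( FixtureYear, 100 )  // e.g. 1 Mar 1981 -> "1980/81"
        )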

Next, I gave some thought to how often the team had won or lost games. This needed to be calculated dynamically, so that if I filtered or summarised the data I saw the correct value; Power BI’s DAX language easily allowed me to add calculated measures for the win, loss and draw percentages.
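
As a flavour of what such a measure looks like, here’s a sketch of the win percentage, again with illustrative table and column names; the loss and draw versions follow the same pattern. Because it’s a measure rather than a stored value, it recalculates for whatever filter or totalling level is in play.

    Win % =
    DIVIDE (
        COUNTROWS ( FILTER ( Results, Results[GoalsFor] > Results[GoalsAgainst] ) ),
        COUNTROWS ( Results )  // all fixtures in the current filter context
    )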

And so the fruits of my labours…

So what does it tell me?

Well, my team wins more than half the games they have played and of the remainder they draw more often than they lose – which is good.

These stats aren’t quite so good in European matches as they are in Domestic ones, but the win percentage remains above 50%.

When I created the win, loss and draw percentages, I created calculated measures using the built-in DAX language; don’t let that worry you either. If you’ve created any formulas in Excel, you already know the basics of DAX.

As you can see, I also managed to plot my team’s games on a map, but more on that next time, when I’ll cover quite how easy that was, barring an odd assumption on my part and a “gotcha” from the Bing mapping engine. I’ll also include a link to the interactive report itself.

Mark Fulgoni is a Principal Consultant in Red Olive’s Data Practice.


Summary from Seminar: What can Analytics and Big Data Do for Your Business?

Published: Oct 13th, 2015 | Author: Jefferson Lynch

Jefferson opens the morning

45 people from around 30 organisations joined Red Olive for our morning seminar on Predictive Analytics and Big Data on Thursday 8th October at the Royal Exchange in central London (Agenda here: http://www.red-olive.co.uk/2015/07/seminar-what-can-analytics-big-data-do-for-your-business).

Les King presenting on Big Data

Attendees heard Les King, our guest from IBM, start the morning with a presentation on the effects of Big Data in industries such as Retail, Energy and Utilities, Media and Healthcare. There was plenty of opportunity to ask about practical experience and considerations, such as the implications of data privacy laws.

Jefferson – evidence for effect of key value drivers on Customer Satisfaction

John shows SPSS Modeler in practice

After a short break, attendees heard from Jefferson and John how Red Olive has been working with a wide variety of clients on a range of Predictive Analytics projects, and there was an opportunity to see how easy this can be in practice through a demonstration of SPSS Modeler.

You can download copies of the two presentations on Big Data and Predictive Analytics here:

What can Predictive Analytics do for your business?

What can Big Data do for your business?

You can download the videos here:

Part 1: Big Data: YouTube – Big Data recording

Part 2: Predictive Analytics: YouTube – Predictive Analytics recording


Seminar: What can Analytics & Big Data do for your Business?

Published: Jul 20th, 2015 | Author: Mark Fulgoni

Are Big Data, predictive modelling and statistics increasingly mentioned in your organisation?

Perhaps you’re not sure what difference they could make to your business and you’re unclear where to start? Or maybe you’re a veteran with years of experience but you’d like to hear some new ideas to give you inspiration?

Then come along to this morning seminar at the Royal Exchange in central London on Thursday 8th October 2015, where you’ll hear about:

  • Novel ways to generate actionable insights from Big Data.
  • A practical approach to applying predictive modelling for benefit in your organisation.
  • A range of real business problems that have been transformed through the use of analytics.
  • Some of the most widely used modelling methods, and the problems each is best suited to.

See a summary of the event here and download the presentations:

http://www.red-olive.co.uk/2015/10/summary-from-seminar-what-can-analytics-and-big-data-do-for-your-business/

Why attend?

By the end of the session you’ll be better equipped to spot candidate problems to tackle in your own organisation, understand the potential benefits, and appreciate the areas where you’re most likely to need external help.

Agenda:

08:45 – 09:15  Registration and refreshments
09:15 – 10:30  Big Data Insights; Les King (Director at IBM Analytics, North America)
10:45 – 12:00  Analytics and Your Business; Jefferson Lynch (Director at Red Olive)
12:00 – 12:15  Q&A

Registration:

If you have received an email invitation to this event please follow the registration instructions provided.

Are you interested in this event, but haven’t received an invitation?

Please email our event team at events@red-olive.co.uk

Speaker Profiles:

Les King, IBM
Les has impacted the vision for IBM’s key Big Data, Analytics and Database strategies by acting as a trusted advisor both to clients and within IBM. He has held this post for nearly two years; however, Les has 22 years of experience in Information Management. He is a globally recognised name across industries in this key space, as a direct result of his balanced experience in both the technical and business arms of the field. Les is passionate about helping clients understand how IBM can help satisfy their business needs and how Information Management technology will strategically work to accomplish this. In this role Les is a respected advocate for clients, working side by side with development, sales and product management organisations to ensure that IBM’s IM portfolio is on a trajectory to meet the most compelling market demands, directly addressing the most pressing needs of IBM’s clients, both globally and locally.

Les is also a part-time professor at Seneca College in Toronto, in the Data Warehousing and DB2 concentration, drawing on his previous experience teaching mathematics at the University of Toronto. He effectively draws a connection between what the marketplace demands and what the next wave of talented students need to know to be market-ready and internationally competitive in an increasingly high-demand field. This is a perfect intersection of his international, technological and market awareness to perpetuate the success of the next generation.
Jefferson Lynch, Red Olive
Jefferson has spent the last 20 years helping diverse companies such as Unilever, Telegraph Media Group, Bank of China, Centrica Energy, Home Retail Group, Thames Water, Novartis Pharmaceuticals and the NHS to extract value from their data. His experience spans the fields of Analytics, Data Mining and Data Management, from large corporate environments running teams of 50-60 people through to mid-size organisations with projects delivered in a few weeks.

Jefferson’s specialties include information and data strategy, predictive and descriptive analytics, business performance improvement and, more recently, the application of Big Data to improve customer relationships.

Jefferson holds first class degrees in Physical Natural Sciences and Engineering from Cambridge University, and is a member of the Chartered Institute of Management Accountants.
John McConnell, Red Olive
John has been delivering Analytical Consulting Services in a broad range of business and research areas for over 20 years. The projects he is involved in range from ad-hoc analyses through to multi-user, high-end, automated analytical solutions delivered with Statistical, Data Mining and Predictive Analytics methods and technologies. He is also a regular conference speaker on the role of advanced analytics in the context of both business and research.

Through the ’90s he worked for SPSS in a variety of international Professional Services delivery and management roles. Since 2000 John has been involved in a number of ventures in Europe and North America which have applied advanced analytical methodologies. In 2004 he co-founded Applied Insights, which specialised in the application of Advanced Digital Analytics. Applied Insights was acquired by Foviance in November 2008.

John has a BSc in Mathematics, Statistics and Operational Research from the University of Manchester Institute of Science and Technology, UK.
Mark Fulgoni, Red Olive
Mark has 30 years of experience designing and developing data systems in the UK, Europe and South East Asia for a diverse range of organisations such as the BBC, Blind Veterans UK, central government departments such as the DTI, Aon Insurance UK, Abacus International Pte, Thermo Fisher Scientific, the International Transport Workers’ Federation and the City of London Police.

Mark started his working life as an assembler programmer before moving into database solutions; his specialties now include data model and data warehouse design, a wide range of data and business consulting, and management information systems. In addition, he is a qualified trainer and project manager.

Mark is a Chartered IT Professional with BCS, The Chartered Institute for IT.

Text analytics: what do the politicians really say (2)?

Published: Jul 9th, 2015 | Author: Jefferson Lynch

In our last posting we introduced the main concepts of text mining and illustrated them using a customer service example from a telecoms company handling a complaint received through Facebook. In this posting we consider a different application of text mining: analysing a large body of unstructured text. In this case we have taken some sets of data from the UK government’s Hansard system, which captures the proceedings of the Houses of Parliament. The example below is based on the speeches and questions of two different members of parliament:

  • Nicholas Soames, a Conservative member (right of centre) and former Minister of Defence.
  • Dennis Skinner, a longstanding Labour member (left of centre).

The various source files were loaded into SPSS Modeler’s text mining platform.  The data was parsed using Natural Language Processing (NLP) to identify prominent concepts (see previous posting) and then some basic analysis of these concepts was carried out.

Nicholas Soames concepts

Let’s start with Nicholas Soames. The most commonly occurring concepts identified are shown in the chart, with “country” being the most frequent. The concept “immigration” occurred 40 times, and so this was expanded further.

A concept map was created centred on “immigration”. This shows the strength of association between two concepts. In the case of “immigration”, the strongest concept associations are with “defence”, “society” and “social”.

Dennis Skinner concepts

One of the top concepts in Dennis Skinner’s comments is “pits”. This is a good example of where understanding context is really important: “pits” here means deep coal mines, relevant for jobs in his constituency. To illustrate the context issue a little more, what would you understand by “tiger woods”? A well-known golfer, or something about a large cat in a jungle?

Using a domain dictionary, concepts can be grouped into particular categories. Doing that with Nicholas Soames’ concepts, and expanding further on his military concept, there seem to be particularly strong links between the categories “human resources”, “finance” and “geographical location”, so if we go back to the relevant original texts we may expect to find the cost of having people in certain locations as a prominent theme.
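
The analysis above was produced with SPSS Modeler’s text mining platform, but the basic mechanics of concept frequency and concept association can be sketched in a few lines of Python using the open-source NLTK library. Treating nouns as crude “concepts”, this is a simplified stand-in for what a real NLP platform does; the speeches list is a placeholder for however you’ve sourced the Hansard text:

    import nltk
    from collections import Counter
    from itertools import combinations

    # One-off downloads: nltk.download('punkt');
    # nltk.download('averaged_perceptron_tagger'); nltk.download('stopwords')

    def extract_concepts(speech):
        """Crude concept extraction: keep the nouns, drop stopwords."""
        stop = set(nltk.corpus.stopwords.words("english"))
        tagged = nltk.pos_tag(nltk.word_tokenize(speech.lower()))
        return {w for w, tag in tagged
                if tag.startswith("NN") and w.isalpha() and w not in stop}

    speeches = ["..."]  # one string per Hansard speech

    frequency, associations = Counter(), Counter()
    for speech in speeches:
        concepts = extract_concepts(speech)
        frequency.update(concepts)                              # concept counts
        associations.update(combinations(sorted(concepts), 2))  # concepts appearing together

    print(frequency.most_common(10))
    print(associations.most_common(10))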

Considering how your business could benefit from text analytics? If you’d like to discuss how Red Olive can help you with your text mining goals, please contact us here or call us on +44 1256 831100.


Text analytics: what do the politicians really say (1)?

Published: Jun 9th, 2015 | Author: Jefferson Lynch

Does Kim need an intro?

What is text analytics / text mining?

While most of our data mining work still relates to things that can be represented by numbers, an increasing amount also requires text mining using natural language processing (NLP). But what does that mean, and what’s involved?

Text mining involves applying analytics to the understanding of text.  Typically we are interested in identifying things like:

  • the number of times a particular concept (Kim Kardashian, terrorism, fraud… ) occurs in a particular body of text and what this may show us about the level of interest in certain subjects among the text authors,
  • the correlation between certain concepts in the text and what this may mean about the opinions of the text authors,
  • the sentiment of the words used and what this may show us about the attitudes of the text authors towards the subjects being discussed.

How is text mining carried out?

This diagram outlines the text mining process, illustrated with the example of a large telecoms company scanning social media feeds to identify incoming messages and classify them so they can be routed to the most relevant part of the organisation for action.

Data mining process

In this case the incoming message is from Facebook. Two actions are applied to it: sentiment analysis, to assess whether it is positive or negative, and parsing using a “part of speech” (POS) tagger; these may be proprietary or open source, depending on the software being used. The tagger adds structure to the text, which is subsequently used during the analysis phase. In this case the sentiment is strongly negative.

Parsing and sentiment analysis

The next stage is “concept extraction”, in which concepts are identified in and extracted from the tagged text.

A dictionary can then be used to match a particular word or concept to a particular subject deemed to be of interest, for example “network” or “product”.

Routing a message

These dictionaries are very domain-specific and it is in the iterative creation of a dictionary that text subject matter expertise is critical.  In this case the concept “signal” has been linked with the subject “network” and so this message can be routed to the network team within the organisation for action.
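
The same pipeline can be sketched with open-source parts. The snippet below uses NLTK’s VADER sentiment scorer and POS tagger, with a small hand-built dictionary standing in for the domain dictionary described above; the message and the dictionary entries are invented for illustration:

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    # One-off downloads: nltk.download('punkt');
    # nltk.download('averaged_perceptron_tagger'); nltk.download('vader_lexicon')

    message = "Three days with no signal on my phone and still no reply from support!"

    # 1. Sentiment: VADER's compound score runs from -1 (negative) to +1 (positive).
    sentiment = SentimentIntensityAnalyzer().polarity_scores(message)["compound"]

    # 2. Parsing: the POS tagger adds structure to the raw text.
    tagged = nltk.pos_tag(nltk.word_tokenize(message.lower()))

    # 3. Concept extraction: keep the nouns as candidate concepts.
    concepts = {word for word, tag in tagged if tag.startswith("NN")}

    # 4. Dictionary lookup maps concepts to subjects, which drive the routing.
    dictionary = {"signal": "network", "coverage": "network", "bill": "billing"}
    route_to = {dictionary[c] for c in concepts if c in dictionary} or {"triage"}

    print(f"sentiment {sentiment:+.2f}; route to {', '.join(route_to)}")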

If you’d like to discuss how Red Olive can help you with your text mining goals, please contact us here or by calling us on +44 1256 831100.

In the next text mining posting we will look at a real example taken from publicly available data: the UK government’s Hansard data relating to two politicians, one from the governing Conservative party (right of centre) and one from the opposing Labour party (left of centre).


What can predictive analytics and statistics do for your business? Seminar, 27th May 2015

Published: Apr 17th, 2015 | Author: Jefferson Lynch

Should you attend?

Are you increasingly hearing predictive modelling and statistics mentioned in your organisation? Not sure what difference they could make to your business? Unclear where to start?

Then come along to this presentation on Wed 27th May in Guildford where you’ll hear about:

  • A practical approach to applying predictive modelling in your organisation.
  • A range of real business problems that have been significantly improved using modelling techniques.
  • Some of the most widely used modelling methods, and the types of problem each is best suited to.

What will be covered?

By the end of the session you will be able to spot candidate problems to tackle in your own organisation, understand the potential benefits, and appreciate the areas where you’re most likely to need help.

For more information please e-mail enquiries@red-olive.co.uk, or call and leave a message on 01256 831100 and we’ll contact you straight back.