How Big Data Can Impact the Healthcare of Pro Athletes

Professional football is a unique business in that its most important assets are put in harm’s way as a function of their job. The NFL has made progress in protecting players from the hazards of the game, but the day-to-day trauma sustained on the field is often missed; players can suffer “stealth injuries” that show no obvious signs of long-term damage. Thanks to technological breakthroughs in the wireless industry, a new generation of wearable physiological-status-monitoring sensors now exists. For example, students at Western Michigan University (WMU) have designed helmet sensors that track the trauma a player accumulates and flag the point at which critical thresholds are exceeded.

Hadoop Basics

With a typical sensor producing a significant amount of data per game and per season, a traditional relational database is not well suited to handling the volume, velocity, and variety of the data. The Hadoop framework was designed specifically for problems of this nature: it can simultaneously store and process large-scale unstructured data sets and deliver meaningful results in real time.
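To get a feel for the scale involved, here is a rough back-of-envelope estimate; the sampling rate, record size, and instrumented time below are illustrative assumptions for the sake of the example, not figures from the WMU project.

```python
# Rough, illustrative estimate of per-season helmet-sensor volume for one team.
# Every constant below is an assumption made for this example.
SAMPLE_HZ = 100             # readings per second per helmet sensor (assumed)
BYTES_PER_READING = 200     # accelerometer values plus metadata (assumed)
PLAYERS = 53                # NFL active roster size
GAME_SECONDS = 3 * 60 * 60  # ~3 hours of instrumented time per game (assumed)
GAMES_PER_SEASON = 17

per_game = SAMPLE_HZ * BYTES_PER_READING * PLAYERS * GAME_SECONDS
per_season = per_game * GAMES_PER_SEASON

print(f"~{per_game / 1e9:.1f} GB per game, ~{per_season / 1e9:.0f} GB per season")
```

Even under these modest assumptions a single team generates on the order of hundreds of gigabytes per season, before practices are counted, which is where a distributed system starts to pay off.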

At its core, Hadoop is made up of two separate but interdependent components: Hadoop Distributed File System (HDFS) and MapReduce.  HDFS is Hadoop’s storage layer which provides a redundant and distributed file system for storing unstructured data.  MapReduce processes and makes sense of large data sets stored on a Hadoop cluster. MapReduce encompasses all data processing elements of the Hadoop system, while HDFS encompasses all storage elements.
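As a concrete illustration, the pair of scripts below sketches how a MapReduce job over helmet-sensor logs might look when run through Hadoop Streaming. The input format (tab-separated player ID and peak g-force) and the 80 g threshold are assumptions for the example, not details from the article.

```python
#!/usr/bin/env python3
# mapper.py -- emits (player_id, 1) for every impact above a threshold.
# Input lines are assumed to look like: <player_id>\t<peak_g_force>
import sys

THRESHOLD_G = 80.0  # illustrative cutoff for a "significant" head impact

for line in sys.stdin:
    try:
        player_id, peak_g = line.rstrip("\n").split("\t")
        if float(peak_g) >= THRESHOLD_G:
            print(f"{player_id}\t1")
    except ValueError:
        continue  # skip malformed records
```

```python
#!/usr/bin/env python3
# reducer.py -- sums the per-impact counts for each player_id.
# Hadoop Streaming delivers keys to the reducer already sorted.
import sys

current_id, count = None, 0
for line in sys.stdin:
    player_id, value = line.rstrip("\n").split("\t")
    if player_id != current_id:
        if current_id is not None:
            print(f"{current_id}\t{count}")
        current_id, count = player_id, 0
    count += int(value)

if current_id is not None:
    print(f"{current_id}\t{count}")
```

The job would then be submitted with the stock streaming jar, along the lines of `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /sensors/impacts -output /reports/impact_counts` (the HDFS paths here are hypothetical).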

Master and Worker Nodes

A typical Hadoop deployment consists of “master” and “worker” nodes that sit on separate computers. The master nodes handle coordination and task delegation, while the worker nodes handle processing and raw storage. If workloads increase, the system scales linearly and seamlessly: simply add more worker nodes to the Hadoop cluster. The NameNode, a process that runs on the master node, stores the metadata describing the data held on the worker nodes. Also on the master node is the JobTracker, which, as its name implies, keeps track of all tasks and processes doled out to the worker nodes. The worker nodes handle the nitty-gritty data processing of the distributed tasks using a daemon called the TaskTracker, and they use the DataNode process to handle data reads and writes. Working together, the master and worker nodes can store and process massive amounts of data efficiently and in real time.
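The division of labor is easier to see in miniature. The toy sketch below is not Hadoop internals, just an analogy: the “master” only delegates work and collects results, the “workers” do the heavy lifting, and scaling out means nothing more than raising the number of workers.

```python
# Toy illustration of the master/worker split (an analogy, not Hadoop code).
from concurrent.futures import ProcessPoolExecutor

def worker_task(block):
    """Stand-in for a TaskTracker processing one block of sensor readings."""
    return sum(block) / len(block)  # e.g., average g-force in the block

def master(blocks, num_workers):
    """Stand-in for the JobTracker: delegate blocks, then gather results."""
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(worker_task, blocks))

if __name__ == "__main__":
    blocks = [[12.0, 45.5, 80.1], [9.3, 101.7], [33.0, 27.8, 64.2]]
    print(master(blocks, num_workers=3))  # scale by raising num_workers
```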

How Hadoop Can Help Football Players

Using Hadoop in conjunction with the plethora of live streaming data coming from wearable sensors enables real-time processing of massive amounts of sensor data to diagnose, predict, and treat medical issues that might otherwise escape even diligent team doctors and trainers. The sensors being designed at WMU can stream head-trauma data wirelessly to a Hadoop cluster, which can perform real-time analyses of every player on the field. New methods of diagnosing head trauma can be uncovered by feeding the data into statistical algorithms that predict future health issues from the massive amounts of historical data collected. This can improve the health of football players, both during their active years and in retirement.
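A minimal sketch of the kind of real-time check such a pipeline could run is shown below. The cumulative-exposure metric, the threshold, and the alerting function are hypothetical stand-ins for whatever criteria team physicians would actually use.

```python
# Minimal sketch of a streaming check over live helmet-sensor readings.
# The exposure metric and cutoff are hypothetical, not medical guidance.
from collections import defaultdict

SEASON_EXPOSURE_LIMIT = 5000.0   # assumed cumulative g-force budget

cumulative_g = defaultdict(float)

def alert_medical_staff(player_id: str, exposure: float) -> None:
    # In a real deployment this would page trainers / team doctors.
    print(f"ALERT: {player_id} exceeded exposure budget ({exposure:.0f} g)")

def on_reading(player_id: str, peak_g: float) -> None:
    """Called for each impact record as it streams off the field."""
    cumulative_g[player_id] += peak_g
    if cumulative_g[player_id] > SEASON_EXPOSURE_LIMIT:
        alert_medical_staff(player_id, cumulative_g[player_id])
```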

Currently, similar sensors are being designed to measure pulse and breathing rates for soldiers, firefighters, and police officers. With Hadoop, these disparate datasets can be easily fused to automatically gain insights into individual health and healthcare on a macro level.
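One simple way such disparate feeds could be fused is a key-based join on person and time window; the field names, values, and one-minute window below are illustrative assumptions only.

```python
# Illustrative fusion of two sensor feeds keyed by (person_id, minute bucket).
impacts = {("qb_12", "2014-09-07T13:05"): 92.4}           # peak g-force
vitals  = {("qb_12", "2014-09-07T13:05"): {"pulse": 138,  # heart/breathing
                                           "breaths": 31}}

fused = {
    key: {"peak_g": impacts[key], **vitals[key]}
    for key in impacts.keys() & vitals.keys()
}
print(fused)
```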

Source: http://www.mlive.com/business/west-michigan/index.ssf/2014/07/helmet_sensors_that_detect_con.html
