I once took visitors to see the Grand Canyon only to find it filled with clouds. Now we find low-lying data clouds, too. The recent ARC post The Power of the Industrial Edge reminds us that with increasing numbers of sensors there will be increasing amounts of data. Transmitted over PROFINET, of course, I would add. So should ALL this data go to the cloud for storage and analysis? Maybe not. (Remember, “cloud” just means the computer is offsite, maintained and backed up by offsite experts.)
Some data is best dealt with locally. When the Grand Canyon fills with what I called low-lying clouds, that is really described as fog. As it turns out, cloud-like storage and analytics kept local are also called fog: fog computing. So, which should you use? I often say there is no substitute for doing the engineering. This is no different. Analyze which data is best dealt with locally… and by locally, I mean at the machine. Odds are you will find that some analysis is best done at the machine, but that other data applies to the whole production line or the whole plant.
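To make the idea concrete, here is a minimal sketch (my own illustration, not from the post, with hypothetical names like `summarize_batch`) of the kind of analysis that fog computing keeps at the machine: raw sensor readings are reduced locally to a compact summary, and only that summary, plus any out-of-limit values, would travel up to the cloud.

```python
# Illustrative fog-computing sketch: reduce raw sensor data at the machine
# so only a small summary goes upstream. All names here are hypothetical.

from statistics import mean

def summarize_batch(samples, alarm_limit):
    """Condense a batch of raw readings into a summary plus any alarms."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
        # Only readings beyond the alarm limit are forwarded in full.
        "alarms": [s for s in samples if s > alarm_limit],
    }

# A batch of hypothetical temperature readings handled at the machine:
batch = [71.8, 72.0, 72.1, 71.9, 98.6, 72.0]
report = summarize_batch(batch, alarm_limit=90.0)
# Instead of six raw values, only this small report need leave the machine.
print(report["count"], report["alarms"])
```

The point is the pattern, not the code: the machine-level (fog) layer answers machine-level questions immediately, while the plant-wide or line-wide questions get the condensed data they actually need.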
So clearly you’ll need clouds and fog to keep your data in its proper place.
UPDATE: Opto 22 has a blog post differentiating Fog computing from Edge computing: Fog Computing vs. Edge Computing.