Designing Scalable IoT Solutions on AWS
Building scalable IoT solutions on the AWS platform lets you focus on core business needs without the hassle of infrastructure management and monitoring.
The Internet of Things (IoT) presents an unparalleled opportunity for every industry to address its business challenges. With the proliferation of devices, you need a solution to connect, collect, store and analyze device data. Amazon Web Services (AWS) provides a range of managed IoT cloud services and toolkits that enable providers to design and build scalable solutions.
Designing IoT solutions natively on the AWS platform, or migrating to it, enables you to focus on your core business without the hassle of low-level infrastructure management or stringing together a hodgepodge of proprietary IoT system management platforms. This ensures high availability for your customers and puts a solid AWS toolkit at your disposal to serve them.
Design to Operate at Scale Reliably
IoT systems must handle near real-time, often high-volume, streaming data from devices and gateways. As your customer scales out their system, your solution and the underlying infrastructure need to scale effortlessly alongside their business.
The best approach is to publish inbound data to a message queue, then load it into a real-time, in-memory cache or buffer layer before writing it to longer-term storage databases. This keeps event handling close to real time while smoothing the data insertion rate, reducing the chance of overwhelming your database writer. Caches can also save money because you can batch your insertions and pipe data elsewhere quickly without constantly consuming your database's read and write quotas.
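As a minimal sketch of this queue-to-buffer-to-database pattern, assuming a hypothetical SQS queue URL and DynamoDB table name, a consumer might drain the queue in batches and write readings in bulk rather than one at a time:

```python
import json
from decimal import Decimal

import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/device-telemetry"  # hypothetical queue
table = dynamodb.Table("DeviceReadings")  # hypothetical table

def drain_queue_once():
    # Pull up to 10 messages at a time instead of handling each reading individually.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=10,  # long polling reduces empty receives
    )
    messages = response.get("Messages", [])
    if not messages:
        return

    # batch_writer buffers and flushes writes in batches, easing pressure on the table.
    with table.batch_writer() as batch:
        for msg in messages:
            # DynamoDB does not accept Python floats, so parse numbers as Decimal.
            reading = json.loads(msg["Body"], parse_float=Decimal)
            batch.put_item(Item=reading)

    # Delete only after the batch write succeeds so failed messages are retried.
    for msg in messages:
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

ElastiCache or Kinesis could play the buffer role in the same way; the point is that writes reach the database in controlled batches rather than one insert per message.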
On AWS, your devices can publish data to Amazon Kinesis, or AWS IoT rules can forward data to Amazon SQS and Kinesis before landing it in stores such as Amazon S3 or Amazon Redshift, or a data lake built on them. These data stores can feed custom dashboards or Amazon QuickSight dashboards for easy monitoring of streaming data.
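For the device or gateway side, a minimal sketch of publishing a reading into a Kinesis stream with boto3 (the stream name and payload shape are assumptions):

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

def publish_reading(device_id: str, temperature: float) -> None:
    record = {
        "device_id": device_id,
        "temperature": temperature,
        "ts": int(time.time()),
    }
    kinesis.put_record(
        StreamName="iot-telemetry",               # hypothetical stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=device_id,                   # keeps a device's records ordered within a shard
    )

publish_reading("sensor-001", 21.7)
```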
Route Large Data Volumes Through Data Pipelines
Consuming incoming data from device topics directly in a single service prevents the system from scaling fully. Such an approach can also limit the availability of the system during failures or data floods.
AWS IoT Rules Engine is designed to connect API endpoints to AWS IoT Core in a flexible way. But each AWS service has different data-flow properties and its own intended use cases, and not every service can act as a single point of entry to the system. Improper use of an AWS IoT service can open potentially disastrous weak points in your solution, or at least create brittle components that won't scale. Pay attention to the data-flow limitations and suggested uses for each AWS IoT service and its pipeline connectors.
For example, in cases of high-volume streaming data, consider buffering (ElastiCache) or queuing (SQS) the incoming data before invoking other services; this makes it possible to recover from failures, increases data availability and reduces costs.
AWS IoT Rules Engine can trigger multiple AWS services such as Lambda, S3, Kinesis, SQS or SNS in parallel. Once data is captured by the IoT system, the rules engine hands it to AWS endpoints (other AWS services) to process and transform it, which lets you store data in multiple data stores simultaneously.
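As an illustrative sketch rather than a prescribed configuration, the boto3 call below defines one topic rule whose actions deliver each telemetry message to a Lambda function, an S3 bucket and an SQS queue in parallel; the rule name, topic filter, ARNs, bucket and queue URL are all placeholders:

```python
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="fan_out_telemetry",
    topicRulePayload={
        # Select messages from all device telemetry topics and capture the device ID.
        "sql": "SELECT *, topic(2) AS device_id FROM 'devices/+/telemetry'",
        "actions": [
            {"lambda": {
                "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-telemetry",
            }},
            {"s3": {
                "roleArn": "arn:aws:iam::123456789012:role/iot-rule-role",
                "bucketName": "iot-raw-archive",
                "key": "${topic()}/${timestamp()}.json",
            }},
            {"sqs": {
                "roleArn": "arn:aws:iam::123456789012:role/iot-rule-role",
                "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry-buffer",
                "useBase64": False,
            }},
        ],
        "ruleDisabled": False,
    },
)
```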
The best and most secure way to ensure all data is processed and stored is to redirect all device-topic data to an SNS topic, which is designed to absorb message floods and to reliably maintain, process and deliver incoming data to the proper message channel. To scale further, use separate SNS topics, SQS queues and Lambda functions for different groups of device topics. Consider landing the data in durable storage such as an SQS queue, Amazon Kinesis, Amazon S3 or Amazon Redshift before processing; this practice prevents data loss caused by message floods, unhandled exceptions in your code or deployment issues.
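A minimal sketch of that fan-out pattern with hypothetical names: one SNS topic per device group, each subscribed to its own SQS queue, so messages land in durable storage before any downstream processing. A real setup also needs a queue policy that allows SNS to deliver to the queue.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic and buffer queue for a single device group (names are hypothetical).
topic_arn = sns.create_topic(Name="device-group-a-telemetry")["TopicArn"]
queue_url = sqs.create_queue(QueueName="device-group-a-buffer")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue so every message published to the topic is persisted
# before any Lambda or analytics job touches it.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```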
Automate Device Provisioning and Upgrades
As your customer's IoT-enabled business grows and their solution scales along with it, manual processes such as device provisioning, bootstrapping the software, security configuration, rule-action setup and device OTA upgrades will quickly become infeasible. Minimizing human interaction in the initialization and upgrade processes saves time and money for both providers and customers.
Designing your IoT devices with built-in (and preferably automated) provisioning and registration—and leveraging the proper AWS tools for device provisioning and management—will enable your IoT solutions to achieve the desired operational efficiencies with minimal human intervention.
AWS IoT provides functionality for batch import of devices, together with a set of policies that can be integrated into dashboard or manufacturing processes, so a device can be pre-registered with AWS IoT and have its certificates installed at the factory. Later on, a device provisioning workflow can claim that device and attach it to a collection, a user or any other permissions entity. AWS also provides the ability to trigger and track OTA upgrades for devices. Given that IoT is so emergent and quick iterations of device firmware are commonplace, being able to manage OTA updates on the same platform that runs your IoT solution is a huge win for both solution providers and customers.
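The sketch below shows programmatic registration with a provisioning template through register_thing; the template body, parameters and certificate ID are illustrative, not a production-ready policy set:

```python
import json

import boto3

iot = boto3.client("iot")

# A minimal template: create a thing named after the device serial number and
# attach a pre-created certificate referenced by its ID.
TEMPLATE_BODY = json.dumps({
    "Parameters": {
        "SerialNumber": {"Type": "String"},
        "CertificateId": {"Type": "String"},
    },
    "Resources": {
        "thing": {
            "Type": "AWS::IoT::Thing",
            "Properties": {
                "ThingName": {"Fn::Join": ["", ["sensor-", {"Ref": "SerialNumber"}]]},
            },
        },
        "certificate": {
            "Type": "AWS::IoT::Certificate",
            "Properties": {"CertificateId": {"Ref": "CertificateId"}},
        },
    },
})

response = iot.register_thing(
    templateBody=TEMPLATE_BODY,
    parameters={
        "SerialNumber": "A1B2C3",                 # hypothetical serial number
        "CertificateId": "exampleCertificateId",  # hypothetical pre-created certificate
    },
)
print(response.get("resourceArns"))
```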
Adopt Scalable Architecture for Custom Components
The scope and potential impact of an IoT solution don't end with connecting devices to the internet and handling their reports. Think about adopting the latest analytics techniques to make sense of all the data you're generating. Consider creating integrations for Google Home, Alexa and so forth to extend the reach of your solution and create more endpoints for customer interaction. The architecture of an IoT solution should ensure that external components can be integrated easily without introducing performance bottlenecks.
Check for Offline Access and Processing
Sometimes it's unnecessary to process all your data in the cloud, and in many cases there's no continuous or reliable internet connectivity available. For such scenarios, add AWS Greengrass at the edge of your network. Greengrass processes and filters data locally, that is, on your IoT devices or gateways, and reduces the need to send all device data upstream for analysis. You can capture all the data, process it on the device or on a gateway that receives reports from a collection of devices, and then send processed data or error events up to the cloud, either by a set of rules or on request. If time-series data is needed, schedule a periodic process that sends device data to the cloud for future enhancements such as machine learning models or cloud analytics tools on AWS.
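As a sketch of local filtering, assuming the Greengrass (v1) Core SDK and hypothetical topic names and threshold, a Greengrass Lambda could forward only abnormal readings upstream and drop or aggregate the rest:

```python
import json

import greengrasssdk

# The Greengrass Core SDK routes this publish through the local core, which relays
# it to AWS IoT Core when connectivity is available.
iot_client = greengrasssdk.client("iot-data")

TEMP_THRESHOLD = 75.0  # hypothetical alert threshold

def function_handler(event, context):
    # 'event' is the local device message delivered by a Greengrass subscription.
    reading = event if isinstance(event, dict) else json.loads(event)
    if reading.get("temperature", 0) > TEMP_THRESHOLD:
        iot_client.publish(
            topic="devices/{}/alerts".format(reading.get("device_id", "unknown")),
            payload=json.dumps(reading),
        )
    # Normal readings stay on the edge (or could be aggregated and sent periodically).
    return
```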
Choose the Right Data Storage Option
IoT systems generate high-speed, high-volume and diverse data. Each IoT device or device topic can have a different format, which may not be manageable through a single database or a single type of data store. An architect should choose data formats and data stores carefully. Think about both read and write requirements, not just now but in a year, two years and five years. Sometimes a single data store works fine; in other cases a hybrid of data stores, each serving a different purpose, helps achieve high throughput. Frequently used static data can be stored in ElastiCache, which improves performance. Such practices help achieve scalability and maintainability of the system.
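For the ElastiCache point, a minimal sketch with a hypothetical Redis endpoint and key scheme: device configuration is read from the cache first and only loaded from the primary database on a miss.

```python
import json

import redis

# Hypothetical ElastiCache (Redis) endpoint.
cache = redis.Redis(host="my-cluster.abc123.cache.amazonaws.com", port=6379)

def get_device_config(device_id: str, load_from_db) -> dict:
    key = "device-config:{}".format(device_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    config = load_from_db(device_id)             # fall back to the database on a miss
    cache.set(key, json.dumps(config), ex=3600)  # refresh hourly
    return config
```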
Filter and Transform Data before Processing
All incoming data will probably require processing or transformation, even if some of the heavy lifting is delegated to edge processors, after which it can be redirected to storage. AWS IoT rules enable you to redirect messages to different AWS services. An architect should think about data in all of its forms: data that needs processing, ignored or static data (such as configuration) and data that goes directly to storage.
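A sketch of that three-way split, assuming a Lambda function invoked by an IoT rule and hypothetical table names and fields: configuration messages are ignored, telemetry is lightly transformed before storage, and everything else is stored as-is.

```python
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
telemetry_table = dynamodb.Table("ProcessedTelemetry")  # hypothetical table
raw_table = dynamodb.Table("RawMessages")               # hypothetical table

def handler(event, context):
    message_type = event.get("type", "raw")

    if message_type == "config":
        # Static configuration: nothing to process per message.
        return {"status": "ignored"}

    if message_type == "telemetry":
        # Light transformation: normalize units before storage.
        celsius = round((event["temperature_f"] - 32) * 5.0 / 9.0, 2)
        item = {
            "device_id": event["device_id"],
            "ts": event["ts"],
            "temperature_c": Decimal(str(celsius)),  # DynamoDB requires Decimal, not float
        }
        telemetry_table.put_item(Item=item)
        return {"status": "transformed"}

    # Everything else goes straight to storage.
    raw_table.put_item(Item=event)
    return {"status": "stored"}
```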
AWS IoT helps you achieve quick device connectivity, secure data ingestion, easy device management, multi-protocol support and much more.