The hidden challenges of serverless functions


Serverless Functions Are Great for Small Tasks 

Cloud computing with serverless functions has gained widespread popularity. Much of their appeal for implementing new functionality comes from their simplicity. You can use a serverless function to analyse an incoming photo or process an event from an IoT device. It’s fast, simple, and scalable. You don’t have to allocate and maintain computing resources – you just deploy application code. The major cloud vendors, including AWS, Microsoft, and Google, all offer serverless functions.
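To make this style of deployment concrete, here is a minimal sketch of an AWS Lambda handler in Python that processes an IoT event. The event fields and the alert threshold are hypothetical, chosen only for illustration.

```python
# Minimal AWS Lambda handler sketch: process a temperature reading from an
# IoT device. The event shape ("device_id", "temperature") is hypothetical;
# real payloads depend on how the function is triggered.

def lambda_handler(event, context):
    device_id = event.get("device_id", "unknown")
    temperature = float(event.get("temperature", 0.0))

    # Flag readings above an arbitrary threshold.
    alert = temperature > 75.0

    return {
        "device_id": device_id,
        "temperature": temperature,
        "alert": alert,
    }
```

There is nothing to provision here: the code is deployed, the platform allocates resources on each invocation, and scaling is automatic.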

For simple or ad hoc applications, serverless functions make a lot of sense. But are they appropriate for complex workflows that read and update persisted, mission-critical data sets? Consider an airline that manages thousands of flights every day. Scalable NoSQL data stores (like Amazon DynamoDB or Azure Cosmos DB) can hold data describing flights, passengers, bags, gate assignments, pilot scheduling, and more. Serverless functions can access these data stores to process events, such as flight cancellations and passenger rebookings, but are they the best way to implement the high volumes of event processing that airlines rely on?
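For illustration, flight and passenger records in such a store might be modelled as items like the following. The table names and attributes are hypothetical and are not taken from any airline’s actual schema.

```python
# Illustrative only: flight and passenger records modelled as DynamoDB items.
import boto3

dynamodb = boto3.resource("dynamodb")
flights = dynamodb.Table("Flights")        # hypothetical table
passengers = dynamodb.Table("Passengers")  # hypothetical table

flights.put_item(Item={
    "flight_id": "AA100",
    "status": "scheduled",
    "gate": "B12",
    "passenger_ids": ["P-0001", "P-0002"],
})

passengers.put_item(Item={
    "passenger_id": "P-0001",
    "flight_id": "AA100",
    "bags": 2,
})
```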

Issues and Limitations 

The very strength of serverless functions, namely that they are serverless, creates a built-in limitation. By their nature, they incur overhead to allocate computing resources each time they are invoked. They are also stateless and must retrieve data from external data stores, which slows them down further. They cannot take advantage of local, in-memory caching to avoid data motion; data must always flow over the cloud’s network to wherever a serverless function runs.
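The cost of statelessness shows up directly in the handler code. In a sketch like the one below (table and key names again hypothetical), every invocation must re-read its working data from the remote store before it can do anything, and write any changes back afterwards.

```python
# Sketch of per-invocation data motion in a stateless function: the flight
# record is fetched from DynamoDB over the network on every call.
import boto3

dynamodb = boto3.resource("dynamodb")
flights = dynamodb.Table("Flights")  # hypothetical table

def lambda_handler(event, context):
    # No durable local cache survives between invocations, so the flight
    # record must be re-read from the data store each time.
    resp = flights.get_item(Key={"flight_id": event["flight_id"]})
    flight = resp.get("Item", {})

    # ... operate on the flight, then persist any changes back ...
    return {"status": flight.get("status", "unknown")}
```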

When building large systems, serverless functions also fail to offer a clear software architecture for implementing complex workflows. Developers must enforce a clean ‘separation of concerns’ in the code each function runs, and when creating multiple serverless functions it is easy to fall into the trap of duplicating functionality and evolving a complex, unmanageable code base. Serverless functions can also generate unusual exceptions, such as timeouts and quota limits, which must be handled by application logic.
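The following sketch shows the kind of environment exceptions application code ends up absorbing: DynamoDB throttling and the function’s own execution-time limit. The retry policy and table name are hypothetical; only the AWS calls themselves are standard.

```python
# Sketch of handling environment exceptions inside a Lambda function:
# back off on DynamoDB throttling and bail out before the timeout hits.
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
passengers = dynamodb.Table("Passengers")  # hypothetical table

def lambda_handler(event, context):
    for attempt in range(5):
        # Stop early if the Lambda timeout is near; the caller or a queue
        # must retry the remaining work.
        if context.get_remaining_time_in_millis() < 2000:
            return {"status": "timed_out", "attempt": attempt}
        try:
            passengers.update_item(
                Key={"passenger_id": event["passenger_id"]},
                UpdateExpression="SET flight_id = :f",
                ExpressionAttributeValues={":f": event["new_flight_id"]},
            )
            return {"status": "ok"}
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ProvisionedThroughputExceededException":
                time.sleep(2 ** attempt * 0.1)  # back off and retry
            else:
                raise
    return {"status": "gave_up"}
```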

An Alternative: Move the Code to the Data

We can avoid the limitations of serverless functions by doing the opposite: moving the code to the data. Consider using scalable in-memory computing to run the code that serverless functions would otherwise implement. In-memory computing stores objects in primary memory distributed across a cluster of servers. It invokes methods on these objects in response to received messages, and it can retrieve data and persist changes back to data stores, such as NoSQL stores.

Instead of defining a serverless function that operates on remotely stored data, we can simply send a message to an object held in an in-memory computing platform and let it perform the function. This speeds up processing by avoiding repeated round trips to a data store, which reduces the amount of data that has to flow over the network. Because in-memory computing is highly scalable, it can handle very large workloads involving vast numbers of objects. Highly available message processing also removes the need for application code to handle environment exceptions.
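As an illustrative sketch (not the ScaleOut Digital Twins API), a flight object living in memory might handle a ‘cancel’ message like this. The `send` callback stands in for whatever message-delivery mechanism the platform provides.

```python
# Illustrative sketch of moving the code to the data: the flight's state is
# already in memory, so a "cancel" message runs against local data instead
# of round-tripping to a remote store.

class FlightObject:
    def __init__(self, flight_id, passenger_ids):
        self.flight_id = flight_id
        self.passenger_ids = list(passenger_ids)
        self.status = "scheduled"

    def on_message(self, message, send):
        # Messages are delivered to this object one at a time, so no
        # explicit locking is needed in application code.
        if message["type"] == "cancel":
            self.status = "cancelled"
            for pid in self.passenger_ids:
                send(target=pid,
                     message={"type": "rebook", "from_flight": self.flight_id})
```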

In-memory computing offers key benefits for structuring code that defines complex workflows by combining the strengths of data-structure stores, like Redis, and actor models. Unlike a serverless function, an in-memory data grid can restrict processing on objects to methods defined by their data types. This helps developers avoid deploying duplicate code in multiple serverless functions. It also avoids the need to implement object locking, which can be problematic for persistent data stores.

Benchmarking Example

To measure the performance differences between serverless functions and in-memory computing, we compared a simple workflow implemented with AWS Lambda functions to the same workflow built using ScaleOut Digital Twins, a scalable in-memory computing architecture. This workflow represented the event processing that an airline might use to cancel a flight and rebook all passengers on other flights. It used two data types, flight and passenger objects, and stored all instances in DynamoDB. An event controller triggered cancellation for a group of flights and measured the time required to complete all rebookings.

In the serverless implementation, the event controller triggered a Lambda function to cancel each flight. Each ‘passenger lambda’ rebooked a passenger by selecting a different flight and updating the passenger’s information. It then triggered serverless functions that confirmed removal from the original flight and added the passenger to the new flight. These functions required locking to synchronise access to DynamoDB objects.
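The benchmark’s exact locking scheme isn’t reproduced here, but the sketch below shows one common way to synchronise such updates in DynamoDB: an optimistic conditional write keyed on a version attribute. The attribute and table names are hypothetical.

```python
# Sketch of optimistic concurrency on a flight's passenger list using a
# version attribute and a conditional write.
import boto3
from botocore.exceptions import ClientError

flights = boto3.resource("dynamodb").Table("Flights")  # hypothetical table

def add_passenger(flight_id, passenger_id):
    item = flights.get_item(Key={"flight_id": flight_id})["Item"]
    try:
        flights.update_item(
            Key={"flight_id": flight_id},
            UpdateExpression="SET passenger_ids = :p, version = :new",
            ConditionExpression="version = :old",
            ExpressionAttributeValues={
                ":p": item["passenger_ids"] + [passenger_id],
                ":old": item["version"],
                ":new": item["version"] + 1,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another writer won the race; the caller retries
        raise
```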

The digital twin implementation dynamically created in-memory objects for all flights and passengers as they were accessed from DynamoDB. Flight objects received cancellation messages from the event controller and sent messages to the passenger digital twin objects. The passenger digital twins rebooked themselves by selecting a different flight and sending messages to both the old and new flights. Application code did not need to use locking, and the in-memory platform automatically persisted updates back to DynamoDB.
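Continuing the earlier illustrative sketch (again, not the ScaleOut API), a passenger twin’s side of this exchange might look as follows; `pick_alternate_flight` is a hypothetical helper standing in for the rebooking logic.

```python
# Illustrative passenger twin: it rebooks itself when told its flight was
# cancelled and notifies both flights by message, with no application locks.

class PassengerObject:
    def __init__(self, passenger_id, flight_id):
        self.passenger_id = passenger_id
        self.flight_id = flight_id

    def on_message(self, message, send, pick_alternate_flight):
        if message["type"] == "rebook":
            old_flight = message["from_flight"]
            new_flight = pick_alternate_flight(old_flight)  # hypothetical helper
            self.flight_id = new_flight
            send(target=old_flight,
                 message={"type": "confirm_removal", "passenger": self.passenger_id})
            send(target=new_flight,
                 message={"type": "add_passenger", "passenger": self.passenger_id})
```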

Performance measurements showed that the digital twins processed 25 flight cancellations with 100 passengers per flight more than 11X faster than the serverless functions. We could not scale the serverless functions to run the target workload of cancelling 250 flights with 250 passengers each, but ScaleOut Digital Twins had no difficulty processing double this target workload, with 500 flights.

Summing Up

While serverless functions are highly suitable for small and ad hoc applications, they may not be the best choice when building complex workflows that must manage many data objects and scale to handle large workloads. Moving the code to the data with in-memory computing may be a better choice. It boosts performance by minimising data motion, and it delivers high scalability. It also simplifies application design by taking advantage of structured access to data.

To learn more about ScaleOut Digital Twins and test this approach to managing data objects in complex workflows, visit: https://ift.tt/PA8CJ01.


