How to implement idempotence in Webhooks
Asaas webhooks guarantee that events are delivered at least once, that is, they follow the "at least once" delivery model. This means your endpoint may occasionally receive the same webhook event more than once, for example when Asaas does not receive a response from your endpoint.
That said, ideally your application should know how to handle duplicate events by using idempotence. This article explains how idempotence works and how you can protect your application.
What is idempotence?
Idempotence refers to the ability of an operation (function) to always return the same result no matter how many times it is executed, as long as the parameters remain the same.
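To make the definition concrete, here is a minimal illustration contrasting an idempotent operation with a non-idempotent one. The functions are purely illustrative and have no relation to the Asaas API:

// setStatus is idempotent: calling it once or ten times with the same argument
// leaves the record in the same state and returns the same result.
function setStatus(record, status) {
  record.status = status;
  return record.status;
}

// appendLog is NOT idempotent: each call changes the state and the result.
function appendLog(record, message) {
  record.logs.push(message);
  return record.logs.length;
}

const record = { status: 'NEW', logs: [] };
setStatus(record, 'DONE'); // 'DONE'
setStatus(record, 'DONE'); // still 'DONE', state unchanged
appendLog(record, 'hi');   // 1
appendLog(record, 'hi');   // 2 — a different result on each call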
Bringing it to the webhook context, if Asaas occasionally sends the same webhook twice, your application should respond to both requests with HTTP Status 200, always returning the same response as the first request received.
Why use idempotence?
Before we explain why we use idempotence, let's analyze the main HTTP verbs: GET, PUT, DELETE and POST.
When REST patterns are applied correctly in your application, the verbs GET, PUT and DELETE are always idempotent:
GET is a query verb that does not change the state of the resource.
PUT, if executed several times with the same parameters, will always return the same result.
DELETE makes the resource state "deleted" on the first request; even if other DELETE requests are sent, the resource state remains the same.
However, the POST verb is the only one of these HTTP verbs that is not idempotent by default: POST can create a new, unique resource each time the operation is performed.
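As an illustration, the sketch below uses a hypothetical /customers API (not the Asaas API) to show why repeating a POST changes the system state, while repeating a PUT does not:

const express = require('express');
const app = express();
app.use(express.json());

const customers = new Map();
let nextId = 1;

// POST is not idempotent: every identical request creates a new resource with a new id.
app.post('/customers', (req, res) => {
  const id = String(nextId++);
  customers.set(id, req.body);
  res.status(201).json({ id });
});

// PUT is idempotent: repeating the same request leaves the resource in the same state.
app.put('/customers/:id', (req, res) => {
  customers.set(req.params.id, req.body);
  res.json({ id: req.params.id });
});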
The webhooks triggered by Asaas use the POST verb by default, which is why it is important for your application to apply the concept of idempotence, so that receiving repeated webhooks does not interfere with your system's logic.
Idempotence strategies
Events sent by Asaas webhooks have unique IDs, and even if an event is sent more than once, it always carries the same ID. One strategy is to create an event queue in your database and use this ID as a unique key; that way it is impossible to save two identical IDs:
CREATE TABLE asaas_events (
  id bigserial PRIMARY KEY,
  asaas_event_id text UNIQUE NOT NULL,
  payload json NOT NULL,
  status text NOT NULL CHECK (status IN ('PENDING', 'DONE')),
  [...]
);
The recommended approach is that, when your application receives the Asaas event, you save it in a table like the one above and reply with 200 to Asaas to indicate successful receipt. Remember to return 200 only after confirming that the event has been persisted in your database table, as we do not guarantee that the event will be resent automatically.
After that, create a processing routine, such as a cron job or worker, to handle the persisted and still unprocessed events (status = PENDING). As soon as processing is complete, mark them with the status DONE or simply remove the record from the table. If the order of events matters to your system, remember to fetch and process them in ascending order. A sketch of such a worker appears right after the endpoint example below.
const express = require('express');
const { Client } = require('pg');

const app = express();
const client = new Client(); // connection settings are read from the environment
client.connect();

app.post('/asaas/webhooks/payments', express.json({type: 'application/json'}), async (request, response) => {
  const body = request.body;
  const eventId = body.id;
  const payload = body; // Save the entire payload to check the "event" during processing
  const status = 'PENDING';

  try {
    await client.query(
      'INSERT INTO asaas_events (asaas_event_id, payload, status) VALUES ($1, $2, $3)',
      [eventId, payload, status]
    );
  } catch (e) {
    // 23505 is the PostgreSQL code for unique violation: the event was already
    // persisted, so just acknowledge it again without inserting a duplicate
    if (e.code !== '23505') {
      // Any other database error: do not acknowledge the event
      response.status(500).json({received: false});
      return;
    }
  }

  // Return a response to say that the webhook was received
  response.json({received: true});
});

app.listen(8000, () => console.log('Running on port 8000'));
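To complement the endpoint above, here is a minimal sketch of the processing routine for PENDING events, assuming the asaas_events table shown earlier. The handleEvent function and the polling interval are illustrative assumptions, not part of the Asaas API:

const { Client } = require('pg');

const client = new Client(); // connection settings are read from the environment
client.connect();

async function processPendingEvents() {
  // Fetch unprocessed events in ascending order to preserve event ordering
  const { rows } = await client.query(
    "SELECT id, payload FROM asaas_events WHERE status = 'PENDING' ORDER BY id ASC LIMIT 100"
  );

  for (const row of rows) {
    await handleEvent(row.payload); // hypothetical function with your business logic per event type
    await client.query("UPDATE asaas_events SET status = 'DONE' WHERE id = $1", [row.id]);
  }
}

// Run the routine periodically (every 30 seconds in this sketch); a cron job works as well
setInterval(() => processPendingEvents().catch(console.error), 30000);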
If your system receives hundreds of thousands of events per day or more, it is recommended to use a more robust queuing solution, such as Amazon SQS, RabbitMQ or Kafka.
Besides addressing idempotence, this solution also makes event processing asynchronous, which gives Asaas a faster response and increases the throughput of the queue of incoming events.
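As an illustration only (not an official Asaas recommendation), the sketch below forwards each event to an Amazon SQS FIFO queue using the AWS SDK for JavaScript v3 and lets the queue deduplicate by event ID. The ASAAS_EVENTS_QUEUE_URL environment variable is an assumption of this example:

const express = require('express');
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const app = express();
const sqs = new SQSClient({}); // region and credentials are read from the environment

app.post('/asaas/webhooks/payments', express.json({type: 'application/json'}), async (request, response) => {
  const body = request.body;

  try {
    // A FIFO queue with the event ID as the deduplication ID drops duplicate
    // deliveries within the queue's deduplication window
    await sqs.send(new SendMessageCommand({
      QueueUrl: process.env.ASAAS_EVENTS_QUEUE_URL, // hypothetical queue URL
      MessageBody: JSON.stringify(body),
      MessageGroupId: 'asaas-webhooks',
      MessageDeduplicationId: body.id
    }));
  } catch (e) {
    response.status(500).json({received: false});
    return;
  }

  // Acknowledge quickly; a separate consumer processes the queue asynchronously
  response.json({received: true});
});

app.listen(8000, () => console.log('Running on port 8000'));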
Another common strategy is to process the webhook within the request itself and save the ID of each processed event in a table.
CREATE TABLE asaas_processed_webhooks (
  id bigserial PRIMARY KEY,
  asaas_evt_id text UNIQUE NOT NULL,
  [...]
);
This way, whenever you receive a new event, you can check this table to see whether the ID has already been processed.
const express = require('express');
const { Client } = require('pg');

const app = express();
const client = new Client(); // connection settings are read from the environment
client.connect();

app.post('/asaas/webhooks/payments', express.json({type: 'application/json'}), async (request, response) => {
  const body = request.body;
  const eventId = body.id;

  try {
    await client.query('INSERT INTO asaas_processed_webhooks (asaas_evt_id) VALUES ($1)', [eventId]);
  } catch (e) {
    // 23505 is the PostgreSQL code for unique violation: the event was already
    // handled, so acknowledge it again without processing it a second time
    if (e.code === '23505') {
      response.json({received: true});
      return;
    }
    // Any other database error: do not acknowledge the event
    response.status(500).json({received: false});
    return;
  }

  switch (body.event) {
    case 'PAYMENT_CREATED': {
      const payment = body.payment;
      createPayment(payment); // your business logic for this event type
      break;
    }
    // ... handle other events
    default:
      console.log(`This event is not accepted: ${body.event}`);
  }

  // Return a response to say that the webhook was received
  response.json({received: true});
});

app.listen(8000, () => console.log('Running on port 8000'));
In this solution, the table acts as the deduplication check and the event is processed within the same request, so everything must complete within the 10-second timeout that Asaas applies to the request.