[Kea-users] KEA instances with shared database


[Kea-users] KEA instances with shared database

Emile Swarts
Hi,

I've looked at implementing multiple KEA instances with a shared database.
I realise that the current setup will hand out overlapping IPs because the leases are stored in memory, not in the database.

I'm in the process of adding KEA HA, but just wanted to confirm that there is no way to force a lookup in the database to find the next available IP before committing to this approach.

Ultimately looking to introduce AWS auto scaling to accommodate the load. Any advice would be appreciated.

Regards,
Emile

_______________________________________________
ISC funds the development of this software with paid support subscriptions. Contact us at https://www.isc.org/contact/ for more information.

To unsubscribe visit https://lists.isc.org/mailman/listinfo/kea-users.

Kea-users mailing list
[hidden email]
https://lists.isc.org/mailman/listinfo/kea-users

Re: [Kea-users] KEA instances with shared database

Tomek Mrugalski-2
On 11.12.2020 18:12, Emile Swarts wrote:
> I've looked at implementing multiple KEA instances with a shared database.
> I realise that this current setup will hand out overlapping IPs because
> the leases are stored in memory not in the database.
Where did you get this impression? It is mostly incorrect. If you
configure Kea to use a database to store leases, Kea will never cache
anything in memory and will always do a DB lookup before assigning
anything. The only way I can think of to make Kea keep the leases
"stored in memory" would be to use memfile as the lease backend, but
then Kea would never look up leases in a DB at all.
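For reference, the backend is selected by the `lease-database` clause; a minimal sketch, with placeholder host and credentials (see the Kea ARM for the full set of options):

```json
{
  "Dhcp4": {
    "lease-database": {
      "type": "mysql",
      "name": "kea",
      "host": "db.example.com",
      "user": "kea",
      "password": "secret"
    }
  }
}
```

Swapping that clause for `"lease-database": { "type": "memfile" }` gives the file-backed, in-memory behaviour described above.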

> I'm in the process of adding KEA HA, but just wanted to confirm that
> there is no way to force a lookup in the database to find the next
> available IP before committing to this approach.
Kea doesn't work the way you think it does. It always looks up the lease
database and never keeps an in-memory cache of it.

Tomek

Re: [Kea-users] KEA instances with shared database

Emile Swarts
Hi, 

Thanks for getting back to me. My assumptions about the in-memory state were largely based on this conversation:

Also, point 4 on this page states (https://kb.isc.org/docs/kea-performance-optimization):

"Avoid shared lease backends. When multiple Kea servers share a single lease backend (e.g. with a cluster of databases serving as the lease backend with multiple Kea instances sharing the same pools of addresses for allocation), they will run into contention for assigning addresses. Multiple Kea instances will attempt to assign the next available address; only the first one will succeed and the others will have to retry."

The design I'm trying to achieve:

1. Multiple KEA instances (AWS ECS Fargate) sitting behind an AWS Network Load Balancer, sharing a single MySQL backend
2. Horizontal scaling of these instances up and down to accommodate load
3. All instances provisioned with the same configuration file (pools, subnets, etc.)
4. Zero-downtime deployments by removing instances and having the load balancer redirect traffic to the remaining instances
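Concretely, each instance would load the same configuration along these lines (the subnet, pool, and DB values here are placeholders):

```json
{
  "Dhcp4": {
    "lease-database": {
      "type": "mysql",
      "name": "kea",
      "host": "shared-db.example.com",
      "user": "kea",
      "password": "secret"
    },
    "subnet4": [
      {
        "id": 1,
        "subnet": "192.0.2.0/24",
        "pools": [ { "pool": "192.0.2.10 - 192.0.2.200" } ]
      }
    ]
  }
}
```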

My concerns are mainly around race conditions and the iterator that finds the next available IP to hand out.
Does it sound like the above could be achieved? 

Regards,
Emile



--
Emile Swarts
Software Engineer
www.madetech.com
twitter.com/madetech


Re: [Kea-users] KEA instances with shared database

Tomek Mrugalski-2
On 12.12.2020 10:55, Emile Swarts wrote:
> Thanks for getting back to me. My assumptions about the in-memory state were largely based on this conversation:
>
> Also, point 4 on this page states (https://kb.isc.org/docs/kea-performance-optimization):
>
> "Avoid shared lease backends. When multiple Kea servers share a single lease backend (e.g. with a cluster of databases serving as the lease backend with multiple Kea instances sharing the same pools of addresses for allocation), they will run into contention for assigning addresses. Multiple Kea instances will attempt to assign the next available address; only the first one will succeed and the others will have to retry."

Ah, that makes sense. That comment was made in the context of incorrect statistics, and a lot has changed since 2017. The underlying problem in that discussion (two Kea instances reporting invalid statistics) is now solved. The problem there was that each instance set the allocated-addresses statistic at start-up and afterwards only increased or decreased it based on its own allocations. This caused weird results, such as a negative number of allocated addresses being reported. It was only a statistics-reporting problem, and it was fixed a while ago with the stat_cmds hook. Another thing that has changed is that Kea is now multi-threaded.
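For context, the stat_cmds hook adds control-channel commands that compute lease statistics from the lease database itself rather than from per-instance counters. A sketch of the relevant command (sent as JSON over the control channel):

```json
{ "command": "stat-lease4-get" }
```

The response contains a result set of per-subnet counters (e.g. total, assigned, and declined addresses); the exact columns may vary by Kea version.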


> The design I'm trying to achieve:
>
> 1. Multiple KEA instances (AWS ECS Fargate) sitting behind an AWS Network Load Balancer, sharing a single MySQL backend
> 2. Horizontal scaling of these instances up and down to accommodate load
> 3. All instances provisioned with the same configuration file (pools, subnets, etc.)
> 4. Zero-downtime deployments by removing instances and having the load balancer redirect traffic to the remaining instances
>
> My concerns are mainly around race conditions and the iterator that finds the next available IP to hand out.
> Does it sound like the above could be achieved?

In principle, Kea does this when it needs to assign a new lease:

1. Pick the next address as a candidate and check whether it's available.
2. If it's not available, go back to step 1.
3. If the address is available (no lease, or an expired lease that can be reused), attempt to insert a lease.
4. If the lease insertion succeeds, great! We're done.
5. If the lease insertion fails (because some other instance took it first), the Kea instance knows it lost the race and goes back to step 1.

That last step should solve many problems: if there's a race and one of the instances loses, it simply retries. This is somewhat inefficient, but in return you can set up an arbitrary number of instances.
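The insert-and-retry loop above can be sketched as follows. This is a toy model, not Kea's actual code: the shared lease backend is stood in for by a lock-guarded dict, playing the role of the unique-constraint INSERT a real database would provide, and all names are illustrative.

```python
import threading

class LeaseDB:
    """Toy stand-in for a shared lease backend with atomic inserts."""
    def __init__(self):
        self._leases = {}           # address -> owning instance
        self._lock = threading.Lock()

    def try_insert(self, address, owner):
        # Mimics an INSERT that fails when the primary key (address) exists.
        with self._lock:
            if address in self._leases:
                return False        # some other instance took it first
            self._leases[address] = owner
            return True

def allocate(db, pool, owner):
    """Walk the pool; skip taken addresses and retry after a lost race."""
    for address in pool:            # step 1: pick the next candidate
        if db.try_insert(address, owner):
            return address          # step 4: insertion succeeded
        # steps 2/5: address taken or race lost -- try the next candidate
    return None                     # pool exhausted

db = LeaseDB()
pool = [f"192.0.2.{i}" for i in range(1, 6)]
results = {}
# Two "instances" allocating concurrently from the same pool.
threads = [threading.Thread(target=lambda n=n: results.update({n: allocate(db, pool, n)}))
           for n in ("kea-1", "kea-2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both instances end up with an address, and never the same one, because only one `try_insert` can succeed for a given address.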

Now, you need to look at your "single MySQL backend". The setup you described will protect against Kea failures, but what about a MySQL service failure? Is that a single point of failure, and if so, is that an acceptable risk for you? If it's a cluster, make sure that two Kea instances connected to different nodes are not able to allocate the same address. Whether that is possible depends on the cluster and how it ensures consistency. I'm afraid I don't have enough experience with clusters to be more specific here. Sorry.

In any case, I'd be very interested in the results you'll get with this setup. Feel free to share on or off the list.

Thanks and good luck,

Tomek


