Hi everyone!
So, our Sesame database crashed today -- the .db file wouldn't open when we restarted the server executable, and we had to shut down our three branches for 20 minutes while I restored a backup from 30 minutes prior and got the server set back up again. The biggest part of the 20-minute delay was waiting while the first client application logged in and the server loaded our 1.2GB application -- and while I watched it load, an idea occurred to me for making our system more stable, faster, and more distributed as we continue to grow. I'd like to run this really long chain of thoughts by you all and see if it's possible.
Background -- our application is a CRM with currently 250,000 consumer records, 2 million note records, and about 100,000 payment records tied to those consumer records. With 1-12 users logged in, everything runs smoothly with no loss of performance. However, we run the application across three branches with up to 28 users currently. The Ubuntu server it runs on at 'head office' has eight 5GHz CPUs, but due to the threading limit of a 32-bit Sesame program, it can only actively use 2 CPU cores at any given time. The biggest slowdown or risk to the database is when we run a mass update, an import, or note entry -- a mass update inevitably uses an xlookup command, which slows down everyone in all of our offices as it hogs one or more CPU threads. The problem compounds when the user is remote to our 'head office' server, because of the latency of the internet connection vs. a local network.
So, my thought today is to set up a master database in one branch and (for the moment) identical mirror copies in our other offices. Each office would work independently on its own mirror database, running on its own server, and a series of synch functions or commands would regularly push new notes, statuses, payments, and so on from each mirror back to the master database.
If one server went down (like today), a single branch would be disabled temporarily, but the others wouldn't notice. As well, if someone ran a (local) mass update function, it would only affect the 12-15 people in their local branch, and eliminate wide-area-network/internet slowdowns.
To make this work, I'm thinking these are my hurdles -- can anyone make suggestions?
1) To synch or coordinate new records, I need an ironclad single set of global values that tracks the reference IDs for new notes, payments, consumer records, and so on. We currently add about 25 note records a minute at peak times, so whatever hands out IDs needs to read the global value, increment it, and never duplicate a number, even when there are near-simultaneous requests. Should I use a text file that is overwritten as new notes are created, should I use @alternateserver to go to the 'master' database (which may cause slowdowns when everyone wants a new number at once), should I set up a tiny database for @alternateserver to use that only stores global values, or is there some other option?
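To show what I mean by the requirement in (1), here's a language-neutral sketch (Python, since I can't post tested SBasic) of an atomic locked counter. The exclusive file lock plays the role that the Sesame server's record locking would play if the counter lived in a tiny 'global values' database -- two near-simultaneous callers can never receive the same number:

```python
import fcntl
import os

def next_global_id(counter_path: str) -> int:
    """Atomically increment and return the counter stored in counter_path.

    An exclusive file lock guarantees that two near-simultaneous
    requests never read the same value -- the property any of the
    options above (text file, @alternateserver, tiny global-values
    database) has to provide.
    """
    # Create the counter file on first use.
    if not os.path.exists(counter_path):
        with open(counter_path, "w") as f:
            f.write("0")
    with open(counter_path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)        # block until we hold the lock
        current = int(f.read().strip() or "0")
        new_id = current + 1
        f.seek(0)
        f.truncate()
        f.write(str(new_id))
        f.flush()
        os.fsync(f.fileno())                 # make sure the new value hits disk
        fcntl.flock(f, fcntl.LOCK_UN)
    return new_id
```

A plain overwrite-the-text-file approach without a lock has exactly the race this avoids: two clients read "100", both write "101", and two notes share an ID.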
2) Copying from database B or C to database A is fairly simple, I think -- I would create a 'synch' button that uses @alternateserver and xlookup to check whether each record already exists in the master database A, and if not, uses @alternateserver and xresultset to create it. Agreed?
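In pseudo-code form (Python standing in for the SBasic button logic, with dictionaries standing in for the databases), the per-record logic in (2) is just "look up by global ID, create if missing":

```python
def sync_to_master(branch_records: dict, master_records: dict) -> list:
    """Push branch records to the master, modeling the 'synch' button.

    branch_records / master_records map a global reference ID (from
    hurdle 1) to that record's field values.  For each branch record,
    look it up in the master (the xlookup step) and create it only if
    it does not exist yet (the xresultset step).  Returns the IDs of
    the records that were copied.
    """
    copied = []
    for ref_id, record in branch_records.items():
        if ref_id not in master_records:           # the xlookup check
            master_records[ref_id] = dict(record)  # the xresultset create
            copied.append(ref_id)
    return copied
```

One nice property: because records are keyed by the global reference ID, re-running the synch is harmless (it copies nothing the second time). One gap to note: this only handles new records -- if the same record can be *edited* in two branches, the synch also needs an update rule (e.g. newest timestamp wins), which is a harder problem.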
3) If I can build an on-the-fly synch function into a command button or what have you, that's great for someone refreshing the master database before running a critical report -- but can I also create a scheduled synch for, say, 2am?
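If the synch can be kicked off from a command line at all (an assumption -- I don't know whether a Sesame client can be driven headless, and the script path below is hypothetical), the Ubuntu side of the scheduling is just a cron entry:

```
# /etc/cron.d/sesame-sync (hypothetical) -- run the branch synch at 2am daily
0 2 * * *  sesame  /opt/sesame/run_branch_sync.sh >> /var/log/sesame-sync.log 2>&1
```

The open question is the trigger itself: whether there's a supported way to fire a Sesame program/mass update on a schedule rather than from a logged-in user's button press.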
4) Rather than synching back from database A to databases B and C, it would be better to simply copy database A over databases B and C after the synch -- is there a non-manual way to force that copy down to the branch offices, forcing their local Sesame client apps to shut down first (or unloading the file in their server app)? Am I making this more complicated than it needs to be?
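For what it's worth, the file-copy half of (4) is the easy part once the branch server is stopped; the part I'm actually asking about is forcing clients off first. A hypothetical sketch, assuming there is *some* command that stops and restarts the branch's Sesame server process (the stop/start commands are placeholders, not real Sesame tooling):

```python
import shutil
import subprocess

def push_master_copy(master_db: str, branch_db: str,
                     stop_cmd: list, start_cmd: list) -> None:
    """Overwrite a branch's .db file with the master copy.

    stop_cmd / start_cmd are whatever shuts down and restarts the
    branch's Sesame server process (hypothetical -- I don't know a
    supported way to force connected clients off first, which is the
    real question).  Copying the file while the server has it loaded
    would risk exactly the corruption we saw today.
    """
    subprocess.run(stop_cmd, check=True)       # unload the branch server
    try:
        shutil.copy2(master_db, branch_db)     # replace branch file with master
    finally:
        subprocess.run(start_cmd, check=True)  # bring the branch back up
```

The try/finally means the branch server is restarted even if the copy fails, so a bad 2am run leaves the branch on its old (stale but working) file rather than down.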
I realize these ideas and questions are rough as I 'think out loud', but I think this is the best way to keep our company growing -- at our current rate, we'll have 5 branch offices by the end of 2016 and 8 by the end of 2017. If I build this properly now, it won't matter if we have 3 million consumer records and 8 million related note records; the only thing affecting speed will be the activity of the local users in each branch.
Thoughts? Comments?
Thanks! Blair