If you are just joining us, this series started a few weeks ago. If you are looking for deeper explanations and test results, you probably want to start at the beginning. Otherwise, read on for the high-level recap and our recommendations.
In this series we set out to answer the question, “Is FileMaker ACID compliant?” We were specifically interested in the client-server deployment model. Although ACID evaluation is typically performed on SQL-based databases, the principles can be used to evaluate and understand just about any database. The tests we performed revealed valuable technical details that can help us deliver even more robust and effective FileMaker-based solutions. Here is a quick recap:
A – Atomicity: Yes
Activities important to an institution can be encapsulated into a transaction, where the entire unit is atomically committed into the live system. This can be accomplished with the judicious use of the relationship graph, Get ( LastError ), Commit Records, and Revert Record. Both simple and complex transactions can be performed in a “scripted” model that doesn’t require interface elements, with the exception of record deletion.
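As an illustration, a minimal scripted transaction might look like the sketch below. The layout, table, and field names here are hypothetical, and the pattern assumes the Invoice-to-InvoiceLine relationship allows creation of related records:

    # Create an invoice and one line item as a single atomic unit
    Set Error Capture [ On ]
    Go to Layout [ "Invoice_TXN" ]                         # layout dedicated to this transaction
    New Record/Request                                     # open the parent record
    Set Field [ Invoice::CustomerID ; $customerID ]
    Set Field [ InvoiceLine::Description ; $description ]  # child created through the relationship
    If [ Get ( LastError ) ≠ 0 ]                           # in real code, trap after every step
        Revert Record/Request [ With dialog: Off ]
        Exit Script [ Text Result: Get ( LastError ) ]
    End If
    Commit Records/Requests [ With dialog: Off ]           # everything above commits as one unit
    If [ Get ( LastError ) ≠ 0 ]
        Revert Record/Request [ With dialog: Off ]
        Exit Script [ Text Result: Get ( LastError ) ]
    End If
    Exit Script [ Text Result: 0 ]

Until the commit succeeds, none of the changes are visible to other users, and a revert discards all of them together.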
C – Consistency: Yes
FileMaker supports numerous field validation capabilities along with flexible security controls, many of which can be calculated at runtime. Stored calculation fields provide additional capability in this area. It is possible to support complex consistency requirements such as foreign key constraints and cascading deletes. Because of FileMaker’s unified development stack, additional validation and user-facing prompts are also supported. These user-facing validation options are helpful, but can be tricky to configure harmoniously with internal consistency rules.
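As one small illustration, a foreign-key style constraint can be approximated with a validation calculation on the child’s key field. The names below are hypothetical, and the sketch assumes an Invoice-to-Customer relationship matching on CustomerID, with validation set to run “Always”:

    // Validation calculation for Invoice::CustomerID
    // Fails whenever no matching Customer record exists through the relationship
    not IsEmpty ( Customer::CustomerID )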
I – Isolation: Yes
The core isolation capability in FileMaker comes from record-level locking. No record may ever be open for editing by more than one user context. If two users collide, one will be granted the lock and the other will receive an error, which developers need to handle using Get ( LastError ) and potentially a Revert Record step. Unlike some other databases, FileMaker automatically propagates record updates into client caches (similar to data binding), and this does have certain side effects developers need to be conscious of.
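The basic trapping pattern is short; a sketch follows (error 301 is the “record is in use by another user” code):

    Set Error Capture [ On ]
    Open Record/Request                                # explicitly request the lock before editing
    If [ Get ( LastError ) = 301 ]                     # another user already holds the lock
        Revert Record/Request [ With dialog: Off ]
        Exit Script [ Text Result: 301 ]               # let the caller decide whether to retry or notify the user
    End If
    # ...safe to Set Field and Commit Records/Requests from here...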
D – Durability: No
Arguably one of the more difficult properties to deliver on, FileMaker’s server product does not claim or demonstrate the ACID durability property. There is some risk of data loss and file corruption during a server crash, a crash being any time the database server process is terminated abruptly. That could be caused by anything from a power failure, up through the operating system, to a glitch in the server process itself. FileMaker Server does provide ample backup tools so that an administrator can restore known-good files after a crash. FileMaker also offers “consistency check” and “recovery” capabilities which are, in most cases, able to repair files that were not closed properly. However, atomicity and consistency are not necessarily preserved when any type of repair is necessary.
Network Durability*: Yes
FileMaker’s processing model is different from that of some other systems, in that transaction steps and various queries are actually performed on the client instead of the server. This is slightly outside the normal discussion of ACID compliance, but it seemed important to test, especially considering the irregularities common to WiFi, WAN, and cellular-based clients. During active phases of the durability tests, the network between client and server was intentionally severed. FileMaker Server performed correctly, discarding edits that had not yet been committed and preserving all changes that had been committed, maintaining atomicity and consistency through these outages.
Summary
In summary, FileMaker does demonstrate three of the four ACID properties. Taking advantage of these capabilities does require a certain amount of practice and intentionality. Even an experienced developer may opt out of certain ACID behaviors to improve system performance, or simply to reduce development cost in nonessential areas of the solution. A server crash is the most problematic scenario, so care should be taken to minimize that possibility, backed by an appropriate backup strategy. It also bears noting that the server does maintain database integrity during client network outages.
These limitations should be kept in perspective. Every organization faces unique challenges with whatever database platform they use. No matter what platform you are on, being aware of limitations is essential so that you can be prepared to deliver value and continuity to the business. FileMaker continues to be a great platform and value for many solutions and environments.
Best Practices
Zoom Out
- Assess your business and solution. How much value is in your data? Could you measure its quality? If your business routinely imports data from error-prone outside sources, how much effort should be budgeted for cleaning up, validating, and preserving data? – The point here is not to over-engineer the solution. Don’t bankrupt your lemonade stand, coding like you work for the NSA.
- What would it cost if your server and all backups were destroyed in a fire or flood? Would your company face labor or opportunity costs if the server was down for 1 business hour? … for 1 business day? Considering potential cost, how much would you be willing to invest/pay to cut your risk in half? How long does it take you to fully restore the server from a backup? – At least every 12 months, try to have a discussion with company management about disaster preparedness. Risks related to a server crash are just one bullet point in a bigger discussion. Transparency about risk will typically lead to an increased budget or else an acknowledgment of acceptable risk.
- Keep things in perspective. MySQL, a hugely popular database, did not default to ACID-compliant table storage until 2010, and that storage engine didn’t gain full-text search until 2013. Mountains of good data are effectively stored in various systems around the globe which are not currently ACID compliant. It would be great to see FileMaker improve in this area, but I see it as just one of many factors. Be educated about your tool’s strengths and weaknesses. Mitigate risks with reasonable effort.
Scripting
- Identify and prioritize vital business transactions and document them; a wiki works well for this. Maybe sales invoicing or inventory is one of your most vital areas with regard to performance, accuracy, and security. Identify the indivisible core of each activity in that area and write a transactional script that does just that one thing. With this type of effort, you can likely resolve any concern related to data integrity in that area. 90% of your business processes likely don’t need this same kind of investment.
- Learn to use error trapping with Get ( LastError ) frequently, so that it becomes a comfortable habit. There are three places where a transactional script is most likely to fail: 1 – getting into the correct context (verify that you are on the correct layout, in Browse mode, and on the correct record); 2 – opening a record (check that you acquired the lock; for multi-record transactions this happens once per record); 3 – the commit itself (typically because of validation). Be prepared to revert the transaction when you can’t recover from an error.
- Consider approaching problems with a tiered development model. You can apply separation of concerns to any new feature or capability you develop. Developing a script that handles a business transaction, independent of the script that handles the user interface, can often clarify the job at hand, reduce software bugs, simplify testing, and produce reusable and portable modules.
- Transactional scripts can protect you from the half-finished tasks that could occur when network connectivity is lost. This might be especially important if your solution includes mobile users on WiFi or on iOS over cellular. Forwarding certain transactions to the server with Perform Script on Server (PSoS) might be even wiser; see the sketch after this list.
- Develop a habit of logging important activity. If possible, put that log table into its own file. Maybe you record an item’s inventory level just before and after shipping it. While there is a real risk of losing a small amount of data during a server crash, logging should help you know what was happening right around that time. It can point you to records that should be verified if FileMaker, after restart, reports that a file was not closed properly.
- Nuances with client-server record locks and caching could lead to naive scripts causing errors in your data. See the isolation article in this series for examples of correct and incorrect code.
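To illustrate the PSoS suggestion above, a client on an unreliable connection can hand the whole transaction to the server and simply inspect the result. The script name and result convention below are hypothetical, assuming the server-side script exits with 0 on success:

    # Client side: forward the transaction to the server and wait for the outcome
    Perform Script on Server [ "Ship Item TXN" ; Parameter: $itemID ; Wait for completion: On ]
    If [ Get ( ScriptResult ) ≠ 0 ]
        Show Custom Dialog [ "Shipping" ; "The shipment could not be recorded. Please try again." ]
    End If

Because the transaction runs entirely on the server, a dropped WiFi or cellular connection can no longer leave it half finished.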
Database Schema and File Structure
- Deleting records can be messy for many different reasons. As it relates to ACID, record deletion is a little harder to perform in an atomic FileMaker transaction than creation and update. It might also work to simply flag records as inactive or to-be-deleted, then delete them later in a nightly or weekly “empty the trash” script that runs on a server schedule (a sketch follows this list).
- Vertical partitioning can reduce lock conflicts. If a wide table is used by two different job areas, you might partition that entity to reduce the frequency that users collide trying to access the same resource. This can be beneficial to both direct user-record interaction and scripted transactions.
- Curtail Full Access users. Any effort to establish consistency in your data through field validations and security is of limited usefulness if users regularly interact with a production system using a Full Access account. Unprotected actions while in full access mode could be accidental (deleting records), intentional, or even malicious. Even “developers” rarely need Full Access on a production system. When they do, consider giving them a second admin account to be used only when necessary. Avoid, or at least use extra care when enabling, “run script with full access,” as this can lead to the same types of problems.
- Consider removing the “Import Records” command via a custom menu set. The import step includes an option to bypass auto-enter field options. While useful to an administrator, this can lead to all kinds of inconsistent records, missing primary keys, etc. There are alternatives, so just be aware of this issue.
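A minimal version of the “empty the trash” purge script mentioned above might look like the following sketch, run from a server schedule. The layout and field names are hypothetical:

    # Delete everything flagged for deletion
    Set Error Capture [ On ]
    Go to Layout [ "Orders_Purge" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ Orders::DeleteFlag ; 1 ]
    Perform Find []                                    # error 401 means nothing is flagged
    If [ Get ( LastError ) = 0 ]
        Delete All Records [ With dialog: Off ]
    End If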
Server Machine
- Set up a reliable server. Using low-quality hardware, outdated software, or untested plug-ins can all increase your chances of a crash. Consider a burn-in or trial period for a server before trusting it with valuable data. Avoid bleeding-edge “.0” releases of software unless there are other, more important factors. Your server and its peripherals should be on an uninterruptible power supply. If you can’t provide reliable power and cooling, then consider using a provider that can maintain your server in a data center.
- Set up a fast server. A storage system that is slow during random writes will not only impact performance for your users, but also widen your window for data loss in the event of a crash. When updates are first received by the server, they are in-flight, held in RAM. A fast, ideally solid-state, storage system lets the server process land that information quickly and get it stored in the database file. Not only do you want a fast volume for database storage, you also want a fast volume for the operating system, where temporary files are stored. Your backup volume should also be sufficiently fast. Measure the amount of time it takes to back up AND RESTORE all your database files. Is it fast enough?
Database Administration
- Develop a robust backup strategy and back up often. Consider utilizing Progressive Backups. Schedule regular fire drills. Be prepared to restore from a backup, knowing how you will do it and how long it will take. FileMaker backup strategies are a much larger topic than we can cover in this article.
- Spend some time adjusting your FileMaker Server’s RAM cache setting. The “cache unsaved %” statistic multiplied by the RAM cache size is roughly the amount of record data you are subject to lose in a catastrophic failure; for example, a 512 MB cache that is 2% unsaved represents roughly 10 MB of at-risk record data. That statistic should usually read 0%. Setting the cache very low might slow the performance of clients who upload changes to the server, while setting it high allows more data to be in-flight, and at risk. The RAM cache is also used for reading, so a larger cache should improve performance for frequently accessed information.
- Welcome to 2016 and solid-state storage! A RAM-based cache was vital to performance when using hard disk drives because the latency to access information from a platter was proportionally so large. Solid-state storage does not suffer this way; RAM is still faster than SSD, but not by as much. Assuming your database server has solid-state storage, you can probably start with the minimum RAM cache setting and then adjust it up if you see a low cache hit rate in your statistics. Even as you increase the cache size, very little unsaved data should ever accumulate in the cache when using SSD storage.
- Consider partitioning your data because of size. Large files take longer to scan for consistency, recover, or restore from a backup. Increasing the number of files by some reasonable factor may compartmentalize your risk and reduce your time to get back up and running.
- Consider partitioning your data because of update frequency. A file with frequent updates appears to be (naturally) at higher risk for data loss if a crash occurs. Isolating data tables into additional files may further reduce risk.
- Larger organizations probably want to write out a specific crash-recovery checklist. Those steps could describe when and how to restart the server and which logs should be examined. It may also include guidance about the relative safety or risk of reusing suspect files after a crash.
- Configure the server to perform a scheduled verify (consistency check) periodically during off hours. If you want to be even more proactive, it may be helpful to periodically take important files from a backup and run them through the recovery process. The logs from this process can alert you to file corruption that has crept into your files and would otherwise go undetected.
About the Author
Chris Irvine is a Senior Technology Consultant at Threeprong.com LLC, and a certified FileMaker Developer. He develops on multiple platforms, specializing in custom workgroup solutions, B2B integration, and performance tuning. Contact Threeprong for your next custom solution or for help taking your existing solution to the next level.
Previous: ACID – Is FileMaker Durable? (Part 5)