Planning a laboratory automation project can be a complex task given the increased demands of modern lab environments. To help make your next project a success, Scinomix has published this guide exploring the common topics and principles of automation projects, along with tips and tricks to help your team stay on track.
Consistency of Inputs
Simply stated, automation and robotics love order and consistent inputs. Automation works best when its environment is predictable. If your process has significant variability or you have a wide variety of inputs, you may find it much more difficult to automate. That’s not to say that a highly variable process with a variety of inputs cannot be automated, only that these more difficult projects often require custom automation, more time to implement, and greatly increased project cost.
Automation works best when its environment is predictable.
It is best to standardize your labware (plates, tubes) and testing samples as much as possible. For example, plant samples tested in the Ag Biotech environment are anything but consistent. This can make automation development a unique challenge, resulting in numerous custom automation solutions that take a great deal of time and money. Constant changes to processes can be just as disruptive to the development and maintenance of a productive automated system.
Automation Friendly Inputs & Process
Choosing inputs that are automation-friendly is also key. In the lab, using SBS-format plates helps, but automation often has a hard time dealing with them if they are damaged (broken skirts). Some lids or seals on tubes or plates are nearly impossible to automate without modifications or labware changes. Sometimes the reagents, samples, or compounds a lab works with are inconsistent and not automation friendly.
One example of unfriendly inputs comes from the compound management environment, where automation has contributed greatly to the storage and retrieval of compounds. Scientists would like to automate the sub-sampling of those compounds as well, but due to the nature of many compounds, only a given percentage can be handled by a robotic liquid handler while the rest will always need to be handled manually.
One example of a process step that is not automation friendly is an ELISA test protocol where users typically blot or slap an assay plate down on paper towels or a blot pad to remove any remaining fluid after the wells are washed. Many automated plate washers do a good job of aspirating most of the liquid from a well when they wash plates. In these cases, it is a best practice to test protocols to determine whether blotting is actually necessary. If blotting is still deemed necessary, there is also plate-washing technology on the market that spins all fluid out of a plate as it washes. The point is that in some instances labware may need to be changed, or some steps may need to be modified. You can always go the custom automation route, but make sure you have a talented group of automation engineers on your team, whether internal or external to your organization.
Inputs and Variables
Having a large number of inputs and/or variables can also greatly increase the complexity and cost of an automated system. Inputs such as sample or assay plates should be consistent and reduced to as few variants as possible. For example, accepting samples from your clients in a variety of tubes and plates (deep-well, mid-well, 96-well format, 24-well format, or even different brands of 96 deep-well, etc.) can cause problems. The automation will need to be programmed to handle all of these variables and inputs, along with a way to detect the differences.
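As a minimal sketch of what detecting those differences might look like in software, the snippet below checks incoming samples against a whitelist of labware the system has been programmed to handle. All labware names, brands, and sample IDs here are purely illustrative, not a real lab's configuration:

```python
# Illustrative sketch: flag incoming samples whose labware the automated
# system has not been programmed to handle. Formats and brands are made up.
ACCEPTED_LABWARE = {
    ("96-deep-well", "BrandA"),
    ("96-well", "BrandA"),
    ("24-well", "BrandB"),
}

def check_labware(samples):
    """Return the samples arriving in unsupported labware."""
    rejected = []
    for sample_id, plate_format, brand in samples:
        if (plate_format, brand) not in ACCEPTED_LABWARE:
            rejected.append((sample_id, plate_format, brand))
    return rejected

incoming = [
    ("S-001", "96-deep-well", "BrandA"),  # supported
    ("S-002", "384-well", "BrandC"),      # not supported -> flagged
]
print(check_labware(incoming))  # [('S-002', '384-well', 'BrandC')]
```

Catching an unsupported plate at intake is far cheaper than programming the automation to handle every format a client might send.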
Another example is liquid transfer volumes. A step in your process may require milliliters of fluid to be transferred, while other steps require sub-microliter volumes. Each of these will require a different robotic liquid handler, and thus more cost.
Example: In the DNA testing lab, it is not uncommon to have thousands of different DNA markers. Clients of these labs may choose any number and volume of markers they need for their research. A lab may choose to build an ultra-flexible automated system to handle this variability, or it could reduce the variability by offering panels of markers. This could reduce instrumentation cost, simplify process steps, and improve throughput, at the risk of reducing flexibility. It is generally recommended to reduce the inputs and variables you have control over, then find automation solutions to deal with those you cannot control.
Process Vs. Procedure
Process, procedure, and practice are always in play in any laboratory environment. A lab “Process” can typically be explained and illustrated with a process map, which shows all the steps involved in the laboratory. A lab “Procedure” is best illustrated by written SOPs, which are typically recipes for the process steps. Having both Process and Procedure documented will go a long way toward developing and implementing a successful automation system.

The third item, “Practice,” is rarely, if ever, documented in any fashion. “Practices” are the things lab staff do during the process that are not written in the process map or SOP, yet they exist in every lab and may or may not be essential to successful experiments. Likewise, they may or may not be automation friendly. These “Practices” need to be identified and evaluated to determine whether they are truly essential. The evaluation is typically done by interviewing lab staff about a particular process, then observing how they actually operate and perform procedures (there is a formal method for this interviewing and observation, called Contextual Inquiry). It is also important to observe more than one lab staff member, as “Practices” may differ from one employee to the next. These differences can create dilemmas when standardizing a process for automation implementation.
“Practices” are the things lab staff do during the process that are not written in the process map or SOP.
Automating Bad Science (or Non-Existent Processes)
Automation can enable good science, but not create it. Scale alone will not solve scientific problems. Science that does not work well and consistently when done manually will not improve when it is automated. Similarly, a non-existent process is extremely problematic to automate.
Both situations happen more often than you might imagine and typically have several causes. One is the time frame associated with capital budgets: labs typically request or obtain capital funding for a given fiscal year, and those capital dollars must be spent during that fiscal year or the funding is lost. Development of the science and/or process is ongoing but is either not complete or not going well. With the window of opportunity for spending closing, lab management decides to buy automation without knowing the potential outcomes. The rushed purchase may or may not be the best choice, and it can leave the lab with less-than-ideal equipment and instrumentation to automate the process efficiently and effectively.
Automation can enable good science, not create it. Scale alone will not solve scientific problems.
Another cause, somewhat like the first, is when the science, process, and software (LIMS) are being developed at the same time. The goal for implementation is aggressive and the only way to meet the timelines is to develop all three at the same time. Unfortunately, this typically results in a situation where all three (science, process, and software) will need to be done a second time.
A final cause, and probably the most prevalent, is when lab management or staff purchase automation instrumentation without full knowledge of what they bought or what it can do. Their science and process may be great, but they failed to ask the right questions of the vendor or did not adequately investigate the capabilities of the instrumentation. Satisfaction with the automation’s performance is often low, which leads to failure of the project.
Error Detection (Avoiding the Giga-Error)
With the production of “Big Data” also comes the opportunity for big errors. Imagine an ultra-high-throughput laboratory environment where big data sets are produced by one or more highly integrated automated systems. In this scenario, everything in the lab is running smoothly. Robotic arms are moving plates from one device to another, liquids are being dispensed and pipetted, plates are moving into and out of detection instrumentation, and lots and lots of data is being produced. But somewhere, something in the system or process is not quite right. You cannot see it or detect it, but it is there. Essentially, you are rapidly and efficiently producing large volumes of worthless data. This might go on for a day, a week, or possibly longer. This is referred to as a “Giga-Error” and can be caused by almost anything: samples or reagents not stored at the right temperature, contamination, poor reagent stability, an incubator not running at the correct parameters, undetectable instrument malfunctions, staff not following formulation protocols, or a staff member having a bad day and putting the wrong reagent in the wrong spot in the system. The list goes on.
But how do you remedy this? What is the best way to detect errors and consistently produce high-quality results? For ideas, try Internet searches for “error detection in the lab” or “big data anomaly detection”. Clinical, medical, and GLP lab environments have typically done a better job of quality control for systems, reagents, and instrumentation. Every research lab’s protocols and instrument usage can differ, so solutions can also vary. Implementing quality checks throughout your protocols, on reagents, assays, and instrumentation, is the best place to start. These checks can be costly and take some time, but they could pay off by helping you avoid the potential Giga-Error.
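One simple form such a quality check can take is a control-chart test on control-well readings: if today's control values drift too far from the historical baseline, halt the run and investigate before producing more data. The sketch below is purely illustrative (the readings, threshold, and function names are assumptions, not any particular lab's protocol):

```python
# Illustrative control-chart check: compare today's control-well mean against
# a historical baseline and flag drifts beyond `threshold` standard deviations.
from statistics import mean, stdev

def control_check(history, todays_readings, threshold=3.0):
    """Return True if today's control-well mean is within the expected band."""
    baseline = mean(history)
    spread = stdev(history)
    z = abs(mean(todays_readings) - baseline) / spread
    return z <= threshold

# Hypothetical absorbance readings from a control well over past runs.
history = [1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00, 1.02]
good_day = [1.01, 0.99, 1.00]
bad_day = [0.62, 0.58, 0.60]  # e.g. a degraded reagent depressing the signal

print(control_check(history, good_day))  # True  -> keep running
print(control_check(history, bad_day))   # False -> stop and investigate
```

Even a crude check like this, run automatically after each batch, shortens the window in which a silent fault can keep generating worthless data from days to a single run.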