Summary of key review points
| Product | What you may like | What you may not like | Cost over 3 years () |
|---|---|---|---|
| Analytic Solver | | Limited set of functions; design of interfaces and reports | |
| Oracle Crystal Ball | Doesn't affect the host model; some Six Sigma graphics | A legacy product; not a true Excel add-in | |
| ModelRisk | The largest range of technical features | Not as well-known as @RISK; can't always convert an @RISK model | |
| RiskAMP | | Minimal set of features; poor results analytics | |
| @RISK | Large user base | | |
If you have an extremely limited budget, RiskAMP is your only option, but it is not of professional quality. Oracle Crystal Ball is a very old product and I doubt it will be much improved. I found it very hard to use and extremely limited compared with the other professional tools. I wouldn’t buy it.
Assuming that you have a budget, your choice is between ModelRisk, Analytic Solver and @RISK. ModelRisk is by far the best risk modelling add-in for Excel, and the least expensive. Analytic Solver is the best choice if you need to do stochastic optimization. Also, though not reviewed here, Frontline Solvers offers a Data Mining tool that complements the Analytic Solver optimization (which usually requires a lot of data to be meaningful). That will double the cost, but the data mining capabilities are impressive.
In my view, @RISK is hugely over-priced compared to the others but gets away with it because it was first to market and has locked in its users. The main reason for using it is if you have no other choice, or you already have a perpetual license.
How risk analysis models are built in Excel
Spreadsheets are often used to get an estimate of some future value, like the total cost of a project, the NPV of an investment or a sales forecast. These estimates can be built up from a set of calculations, perhaps over several sheets. The results are single-point estimates, and an obvious worry is – how confident are we in those estimates?
A risk analysis add-in allows us to replace any of the spreadsheet’s input numbers with a function that generates random values that reflect how uncertain one is about that number – a process known as Monte Carlo simulation. The Monte Carlo simulation add-in allows the user to flag which cells are outputs, and then automatically generate several thousand scenarios by “running” the model so one can see the range and likely values of each output. That will tell us how confident we are in the results. Excel is a great environment for running such models because Excel is so familiar and easy to build models with. This topic describes the Monte Carlo simulation process in more detail.
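The process just described can be sketched outside Excel too. Below is a minimal Monte Carlo model in Python, with invented triangular-distribution cost inputs standing in for the spreadsheet’s uncertain cells; the numbers are purely illustrative:

```python
import random
import statistics

random.seed(1)  # fix the seed so the run is reproducible

def project_cost():
    """One scenario: each uncertain input is sampled instead of fixed."""
    design = random.triangular(80, 140, 100)    # (min, max, most likely)
    build = random.triangular(200, 400, 250)
    testing = random.uniform(40, 60)
    return design + build + testing            # the 'output cell'

# Run a few thousand scenarios and summarise the output
samples = sorted(project_cost() for _ in range(5000))
print("mean cost:", round(statistics.mean(samples), 1))
print("90% interval:", round(samples[250], 1), "to", round(samples[4750], 1))
```

Each call to `project_cost()` is one scenario; the 90% interval is read straight off the sorted samples, which is exactly the kind of summary the add-ins report for each output cell.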
Why you might value my review
Most people using a Monte Carlo add-in for Excel have experience of just one product. They may well have used the same product for many years without ever comparing against others. That’s a shame because there are several good products on the market, and given the heavy annual investment one makes, I think it’s worth stepping back occasionally and checking what else is on offer. This review will hopefully stimulate such comparisons. As you can see, I’ve tried them all:
I’ve worked as an independent risk analysis consultant using such tools for thirty or so years. I’ve written several books on risk analysis using them, as well as some international guidelines for organisations like the WTO, WHO, EPRI and The Open Group.
I was the first international reseller of @RISK (in New Zealand and Australia) and headed a consulting company in the USA that had a contract with the developer of Oracle Crystal Ball when their customers asked for a risk analysis model to be built for them. My company, VOSE, produces the ModelRisk software – but I hope this review manages to remain objective. For that reason, I’ve stuck to verifiable facts and avoided commenting on aesthetics and other subjective areas. I’ve only had rare occasion to use Analytic Solver, because stochastic optimization hasn’t been a frequent requirement. I’ve not used RiskAMP professionally, but it is a simple enough tool to work with. I reviewed the latest versions of each product.
How to choose the best risk analysis Excel add-in for you
Technical level needed
First consider how technical your models might be. If you are doing something simple like adding up a list of costs, then all the reviewed products will be fine. Many technical fields use quite specific probability distributions – for example, engineers will use waiting time distributions like the Weibull, epidemiologists will use sampling distributions like the hypergeometric, and quants (mathematical modellers of markets) will use log return distributions like the skew normal. The same applies to copulas (correlation structures), and types of time series (various GBM derivatives, Markov Chains). I recommend reading about a few models in your field, noting which distributions, time series and copulas come up, and making sure they are featured in the software.
Simulation speed is generally not that important these days, and more a reflection of the quality of the underlying code. When I started in this profession, a simulation of 500 samples would take a whole night to run, but now all the products complete a run of 1-2,000 samples of a similar model in a couple of minutes. There are two scenarios in which speed is important. The first is when you need a high level of precision on a tail probability. For example, in capital allocation models for banks, a regulator can require such a high (illusory, in my view) precision on the 99.9th percentile that one has to run 500,000 samples or more. That can take quite a while, and the software needs to have the memory to handle such large simulation data. The other exception is stochastic optimization – for example, finding the right mix of stocks to maximise the 10th percentile of a portfolio’s return. Stochastic optimization requires hundreds of reruns of your model, and then you should just get Analytic Solver, which is designed to meet that requirement.
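To see why tail percentiles need so many samples, this sketch repeats a 99.9th-percentile estimate at two sample sizes and compares how much the estimates wobble. The lognormal “loss” variable is invented for illustration:

```python
import random
import statistics

random.seed(7)

def pctl_999(n):
    """Estimate the 99.9th percentile of a lognormal loss from n samples."""
    draws = sorted(random.lognormvariate(0, 1) for _ in range(n))
    return draws[int(0.999 * n)]

# Repeat the estimate many times at two sample sizes and compare the spread
small = [pctl_999(2_000) for _ in range(30)]
large = [pctl_999(50_000) for _ in range(30)]
print("sd of estimate, 2,000 samples: ", round(statistics.stdev(small), 2))
print("sd of estimate, 50,000 samples:", round(statistics.stdev(large), 2))
```

The spread of the estimate shrinks roughly with the square root of the sample size, which is why regulatory-grade tail precision pushes run sizes into the hundreds of thousands.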
The maths of probability distributions
Next consider the types of calculations you will want to apply to the distributions. For example, you may want to sum a large number of identical distributions (convolution) – in which case, the software should have the facility to do this in a single cell or your model will explode in size. You might be interested in extremes – for example, engineers design structures to resist the largest wave, wind gust or earthquake tremor, in which case some extreme value functionality will avoid a lot of modelling complexity. You might also need to divide one distribution by another (reverse convolution), which is more tricky than you might imagine.
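As a sketch of why a single-cell convolution facility matters, the Python below (with invented gamma-distributed cost items) compares the brute-force approach, which needs N sampled values per iteration – N cells in a spreadsheet – with a one-draw shortcut based on the central limit theorem. The shortcut is just one illustration of collapsing the sum, not how any particular vendor implements its aggregate function:

```python
import random
import statistics

random.seed(3)

N = 1000  # number of identical cost items to sum (the convolution)

def total_cost_brute_force():
    # One distribution per item: in a spreadsheet this would be N cells
    return sum(random.gammavariate(2.0, 50.0) for _ in range(N))

def total_cost_one_cell():
    # By the central limit theorem the sum of many identical items is
    # close to Normal, so a single-cell function can draw it directly
    mean = N * 2.0 * 50.0
    sd = (N * 2.0 * 50.0**2) ** 0.5
    return random.gauss(mean, sd)

brute = [total_cost_brute_force() for _ in range(1000)]
short = [total_cost_one_cell() for _ in range(1000)]
print("brute-force mean:", round(statistics.mean(brute)))
print("one-cell mean:   ", round(statistics.mean(short)))
```

Both approaches agree on the distribution of the total, but the second keeps the model to a single cell however large N gets.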
Graphical user interfaces
We don’t typically have a lot of intuition about probability, so visual cues can be very helpful. At the minimum, one should be able to click on a distribution function and have an interface pop up graphing that distribution. RiskAMP does this for only a few distributions, which marks it down in my view. Better still is some interactivity – e.g. be able to move a slider across the distribution and read off probabilities. The more varied the functions, the more interfaces needed, which ideally should pop up with a single click of an icon. The products reviewed here have quite different designs – you should look at them carefully, as they will be used a lot, especially when checking or explaining your model.
Random number generator
People get worked up about the random number generator (RNG) that is used. The Mersenne Twister is used by all the products and is the de facto standard. More important by far is how the software converts the RNG value to a sample from the applicable distribution – in my experience that is where things go wrong. Being able to control the seed value is quite useful, as it means you can reproduce the same results exactly from one run to another. Being able to see a specific scenario back in your model allows you to find rare errors or see scenarios that generated specific results. Stratified sampling methods (Latin Hypercube, mid-point Latin Hypercube, Sobol) don’t add any precision unless you have a very, very small number of distributions in your model, and then your model will run extremely fast anyway so you can get more precision just by running more samples.
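Seed control and independent streams can be sketched like this – Python’s own Mersenne Twister generators here, standing in for whatever RNG machinery an add-in exposes:

```python
import random

# One way to get reproducible, independent streams: a separate generator
# (with its own seed) for each uncertain variable
cost_rng = random.Random(101)
time_rng = random.Random(202)

run1 = [(cost_rng.gauss(100, 10), time_rng.uniform(5, 9)) for _ in range(3)]

# Re-seeding reproduces the identical scenarios, e.g. to inspect the
# particular sample that produced a rare or extreme result
cost_rng.seed(101)
time_rng.seed(202)
run2 = [(cost_rng.gauss(100, 10), time_rng.uniform(5, 9)) for _ in range(3)]

print(run1 == run2)  # True: same seeds give the same sample stream
```

Because each variable has its own stream, changing one part of the model doesn’t disturb the samples drawn for the rest – which is what makes run-to-run comparisons meaningful.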
If your colleagues are all using one tool, you might find it easier to go with the flow, or your IT department may force you to use a particular product. That said, if you have many users, your purchasing department might be persuaded if switching can save them hundreds of thousands of dollars (which does happen).
Technical support is also more important than you might imagine. Add-in developers for Excel are at the mercy of Microsoft, which has been known to change how Excel behaves without warning, affecting add-ins and causing all manner of confusion. The SPILL feature is one such example. Your organisation’s IT security protocols may also change, which can affect getting updates installed and accessing results files. At VOSE, we have installations in critical government agencies that require quite a lot of individual care. All the suppliers will profess to having excellent tech support, so try putting in a ticket as a trial user and see what the response is.
Example models and videos are really helpful. Online training options are also quite useful but check the costs as they can be as much as the software. I have a Udemy course for beginners that’s pretty good and will set you back the price of a prawn sandwich. On-demand consulting (e.g. buy a number of consulting hours to be used when needed) is particularly valuable if you are new to risk analysis – not just how to click buttons, but things like ‘how do I model this problem?’ or ‘what is the most effective way to show these results to my executive?’.
Slick reporting is also high on people’s list. That usually means copying a nicely formatted graph into your PowerPoint or Word presentation for the boss. I’ve generally found that the boss isn’t that fussed about the nice formatting, but an elegant presentation is always preferable to a scrappy one. What will be more important is being able to update a presentation quickly. Some last-minute assumption changes just before a board presentation can be really stressful. It’s worth making a test: create a report, then check how quickly you can update it and how consistent the look is after the update.
Offering a trial version
Personally, I wouldn’t buy software I couldn’t try out first (even if a colleague recommended it). All the reviewed products offer trials direct from their websites. The usual time limit is 15 days; expect some watermarks. ModelRisk is a bit unusual in that it morphs into a basic version you can keep using after the trial period is finished. Make sure you can find the uninstall instructions before installing. Don’t wait until day 14 to start testing; you may not get an extension.
Future proofing – no doubt you’ve noticed that software is moving online. Ask the developer about their road map.
And saving the best until last – price. You should compare prices and value metrics in the table at the end of this article.
The top 5 Monte Carlo add-ins for Excel
Analytic Solver
Dan Fylstra was one of the early PC software pioneers. He helped develop VisiCalc in 1978, the first spreadsheet app on the market. Later, he was awarded the contract to code up Excel’s Solver, and down the line that resulted in his company, Frontline Solvers, creating a host of optimization software products. One such product is Analytic Solver, which is an add-in that combines Monte Carlo simulation with optimization. This means that you can build a model with a few decision variables (i.e. variables you have control over) and set Analytic Solver the task of finding the values of these decision variables that optimize (maximize, minimize) some stochastic result (like the 95th percentile of profitability).
You can see from the screen cap above that the Simulation Model section is a small part of the menu. Analytic Solver has focused on optimization. Dan’s experience developing Excel’s Solver gave him inside knowledge of how Excel’s calculation engine works, so Analytic Solver is easily the fastest of all the Monte Carlo add-ins. There are a number of interesting built-in tools outside risk analysis – the AI Agent (a help robot) worked really well in testing, and there are options for publishing to the Web, etc.
- What you may like
- Stochastic optimisation
- Simulation speed
- What you may not like
- Limited set of functions
- Design of interfaces and reports
Oracle Crystal Ball
Crystal Ball was one of the earliest Monte Carlo simulation ‘add-ins’ for Excel. It was developed with the idea of overlaying uncertainty onto numbers that were present in a spreadsheet. So it didn’t change any formula inside the cells. The idea is seemingly appealing, but in reality I can say from experience (see intro) that it was very hard to build and check a model built with Crystal Ball. Crystal Ball was developed by Decisioneering, Inc and sold to Hyperion, which was almost immediately sold to Oracle Corporation. Thus, Oracle became an almost unwitting owner of Crystal Ball, which has seen no development since – the latest version in 2012 was 11.1.2, the latest version now is 11.1.3.
- What you may like
- I’ve heard of Oracle giving Crystal Ball away to its large customers as a deal sweetener
- What you may not like
- A legacy product
- Not a true Excel add-in
ModelRisk
Going carefully here, since this product is our own. ModelRisk was born out of the work my risk analysis consulting company was doing because the tools available at the time were rather crude. We introduced time series, copulas, MLE fitting, various results analytics, some thirty new distributions, and much else to the risk analysis add-in market. You’ll see that these features are now basically standard in risk modelling, so that’s something to be proud of. Some other features were a bit tougher to copy – for example, how we use Objects, which is really very powerful and opens up very simple ways to solve quite difficult problems. The way ModelRisk collates, analyses, presents and shares results is also unique – it uses a sister app called the Results Viewer that ships with ModelRisk (see below). That is worth looking at carefully when making comparisons with other products. If you are doing some very technical work, ModelRisk is likely the best candidate as it has the greatest number of tools. ModelRisk is also available in a number of languages – maybe not so important to you as you’re probably reading this in English, but perhaps you have colleagues in other countries.
A unique feature is the ability ModelRisk has to morph its ribbon and menus to suit your profession. You can keep it simple and show just the basic tools, or those an engineer uses, or a finance person, an insurer, an epidemiologist, etc. And it remembers (like @RISK, to be fair) which distributions you use most so you don’t have to wade through them all.
- What you may like
- The largest range of technical features
- The Results Viewer
- The price
- What you may not like
- No decision trees
- No stochastic optimization
@RISK
@RISK first appeared as an add-in for an early spreadsheet called Lotus 1-2-3. Lotus 1-2-3 functions began with the @ sign, like @SUM(A1:A3), hence the unusual name. @RISK was developed in the 1990s by Palisade Corporation, a family-owned company headed by Sam McLafferty. Palisade Corporation developed a number of other Excel add-ins, but @RISK remained the dominant product. Alongside Crystal Ball, it was the first Monte Carlo simulation spreadsheet add-in on the market. Unlike Crystal Ball, it used the host spreadsheet’s protocols for its functions. @RISK remains the most commonly used Excel risk analysis add-in to this day, largely due to the early dominant position it established.
Sam passed away in 2017, and the company was sold to a private equity firm that reduced headcount and raised prices. It was then sold on to a much larger private equity firm, TA Associates. I don’t think any of the original development team are still around.
@RISK is undoubtedly a mature and sophisticated product. It has many of the features provided with ModelRisk, and largely uses the same function protocols so it is a simple matter to convert an @RISK model to a ModelRisk model. The reverse is not always so easy as @RISK misses some of the capabilities of ModelRisk, like ‘Objects’. However, for most users they will have all they need with either product.
@RISK and ModelRisk both offer trial versions, so key differences to review are:
- The number of user interfaces for functions, and how they are accessed (e.g. ModelRisk’s ViewFunction feature)
- The results interfaces
- The ability to share results without sharing the model
- How each product handles stochastic “multiplication” – the RiskCompound function in @RISK, the VoseAggregateMC function in ModelRisk, and stochastic “division” – the VoseStopSum function in ModelRisk
- How cells are labelled as inputs and outputs, with names, units, etc
- What you may like
- Good, stable product
- Large user base
- Plenty of example models
- What you may not like
- The very high price for what you get
- Only available in English and Spanish
- Limited tools for complex problems
- Difficult to share results and create reports
RiskAMP
RiskAMP is produced by Structured Data LLC. It is the least developed product in this list, but it’s also easily the cheapest. RiskAMP lacks most of the user-interface features that one expects in an Excel risk analysis add-in – for example, an integrated help file, or an interface for displaying results. Recommended if you are teaching a class on basic simulation, or if you have an extremely limited budget.
- What you may like
- What you may not like
- Very minimal set of features
- Minimal interfaces
- Results dumped into Excel charts and tables
Notes on some features
Fit distribution to data interface
I’ve chosen this interface to show each product’s approach to interface design and technical focus. The same data were used with each product. There are a few differences in approach described below.
In my view, one should not fit certain distributions to data. For example, the best fitting Uniform distribution to a data set just takes the data MIN and MAX as its parameters, which implies the variable could not take a value outside the range already observed – a very strong assumption and patently absurd if one has two or three data points. Good software should help the user avoid such mistakes.
I also think we should consider the stability of the estimated parameters. For example, if you have 3 measurements of the height of people, you could fit a Normal distribution to that data set (with its mean and standard deviation parameters). But acquire a fourth data point and the parameter estimates will likely change a lot, whereas if you had fifty data points they won’t. Incorporating the uncertainty in the parameter estimates is done using a technique called parametric bootstrapping.
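The height example above can be sketched as a parametric bootstrap. The three measurements below are invented, and the Normal fit uses simple moment estimates:

```python
import random
import statistics

random.seed(11)

heights = [152.0, 160.0, 171.0]  # invented data: just three measurements

def fit_normal(data):
    """Fit a Normal by its sample mean and standard deviation."""
    return statistics.mean(data), statistics.stdev(data)

mu_hat, sigma_hat = fit_normal(heights)

# Parametric bootstrap: resample from the fitted Normal, refit each time,
# and look at how much the parameter estimates wobble
boot_mus = []
for _ in range(2000):
    resample = [random.gauss(mu_hat, sigma_hat) for _ in heights]
    boot_mus.append(fit_normal(resample)[0])

print("fitted mean:", round(mu_hat, 1))
print("bootstrap sd of the mean estimate:", round(statistics.stdev(boot_mus), 1))
```

With only three data points the bootstrap spread of the mean estimate is large; rerun it with fifty points and it shrinks dramatically, which is the stability argument made above.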
Finally, it is important to recognise that a distribution with say four parameters has more chance of matching the data than another distribution with two parameters because of its greater flexibility in shape. Statisticians use Information Criteria to adjust for this. ModelRisk introduced Akaike Information Criteria (AIC) some ten years ago, and @RISK and Analytic Solver quickly followed.
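The AIC adjustment can be sketched as follows, comparing a 2-parameter Normal and a 2-parameter Uniform fitted by maximum likelihood to invented data; the 2k term in the formula is what penalises distributions with more parameters:

```python
import math
import random
import statistics

random.seed(5)
data = [random.gauss(50, 10) for _ in range(100)]  # invented data, truly Normal

def aic(log_likelihood, k):
    # AIC = 2k - 2 ln L: lower is better; 2k penalises extra parameters
    return 2 * k - 2 * log_likelihood

# Candidate 1: Normal, fitted by MLE (k = 2 parameters)
mu = statistics.mean(data)
sigma = statistics.pstdev(data)  # MLE uses the population form
ll_norm = sum(-0.5 * math.log(2 * math.pi * sigma**2)
              - (x - mu) ** 2 / (2 * sigma**2) for x in data)

# Candidate 2: Uniform, fitted by MLE (k = 2: the data min and max)
lo, hi = min(data), max(data)
ll_unif = len(data) * -math.log(hi - lo)

print("AIC Normal :", round(aic(ll_norm, 2), 1))
print("AIC Uniform:", round(aic(ll_unif, 2), 1))  # higher (worse) here
```

Here both candidates have two parameters, so the ranking comes from fit quality alone; add a candidate with three or four parameters and the 2k penalty stops its extra flexibility from winning by default.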
Analytic Solver distribution fitting
Analytic Solver’s algorithm for fitting a distribution to data is excellent. It will only place a sampling function for the fitted distribution into the model, with no option to reference the source data inside the function. It also does not consider parameter uncertainty.
Oracle Crystal Ball distribution fitting
Oracle Crystal Ball still uses old fitting statistics (Anderson-Darling, Kolmogorov-Smirnov, Chi-Squared) instead of the more modern Information Criteria. That’s why Oracle Crystal Ball has the 4-parameter beta distribution fitting better than the normal (which the data came from). Like Analytic Solver, it will only place a sampling function for the fitted distribution into the model, with no option to reference the source data inside the function. It also does not consider parameter uncertainty.
ModelRisk distribution fitting
ModelRisk uses the Akaike Information Criteria (AIC) and allows comparison using a Likelihood Ratio (LR). It has the option to reference the source data inside the function. It also has the option to incorporate parameter uncertainty – shown here using multiple fit lines. A Normal distribution linked to the fitted data looks like this:
@RISK distribution fitting
@RISK’s algorithm for fitting a distribution to data is also good, though not completely consistent in its fitting method (note the Uniform distribution). It will place a sampling function for the fitted distribution into the model with or without reference to the data. It cannot incorporate parameter uncertainty into the model but is able to show it in the fitting interface (which misses the point). Note the mode estimation is technically meaningless and should be ignored. A Normal distribution linked to the fitted data looks like this:
Presenting the results of a Monte Carlo simulation
Analytic Solver
After a simulation, with Analytic Solver you can navigate to cells with inputs or outputs and a thumbnail graph will pop up. Click the thumbnail and a full chart will open, which can then be edited. Make sure to remove the 3-D effect.
Oracle Crystal Ball
Oracle Crystal Ball produces many pop-up charts when a simulation is run. The charts themselves are quite attractive, but the sheer number that can be generated, and the difficulty of specifying each one, makes it a challenge to manage.
ModelRisk
ModelRisk has a separate Results Viewer app that opens when a simulation is finished (or can be made to stay open and update during a simulation run). What makes it different from the other reviewed products is that in the one window one can create as many tabs as needed, each with its own chart or table, selecting inputs, outputs and simulation runs as appropriate. The collection of charts can be formatted as much as needed and directly exported to different file formats. The Results Viewer settings are stored in the Excel file, so when the model is rerun it automatically creates the same graphs.
@RISK
Click the Start Simulation button and a chart will pop up that displays the results as they progress for the first model output. Type ALT+right arrow and it will display the next output. The small icons at the bottom of the window allow changing the type of graph for that output. When a simulation is finished, you can select a graph from the dropdown list in Reports, then select a target cell to see that graph. Being used to ModelRisk’s Results Viewer, I found the many steps needed to produce a chart, and the resultant many open chart windows, very tedious.
Decision trees
Decision trees are graphical depictions of a sequence of decisions and chance outcomes. They are an old decision analysis tool in which one could calculate in a crude way the best option to take. The graphs are somewhat useful for visually explaining some decisions and can easily be done in a tool like Visio. The analytical method has been superseded by running multiple scenarios using Monte Carlo simulation, which is easier to build, more flexible and far more realistic. It’s a mystery to me why they still exist, but some vendors have a decision tree component to their risk analysis add-in. Excel isn’t a graphical tool, and in my view, building a decision tree in Excel is trying to put a square peg in a round hole. Both Analytic Solver and @RISK (with their PrecisionTree product, not reviewed) do this. The incompatibility with Excel is easily illustrated. Here is a decision tree built with Analytic Solver:
In this decision tree, the numbers and formulae are in cells, but the lines are drawn on top, which can be messed with like this:
Feature comparison table
Cumulative cost (USD - see reference):
|Extra reparameterised distributions
|Correlation models (copulas)
|Time series models
|Total functions (approx)
|Speed trial (secs)
|Languages other than English
Value for money metrics:
|Prob models / $k
|Functions / $k
|Samples/sec / $k
Features check list:
|Follows Excel add-in rules
|Cloud deployment possible
|Interface adapts to your usage 
|Monte Carlo simulation
|Multiple simulation runs for scenarios
|Run macros during simulation
|VBA and C++ functionality
|Control of precision of results
|Independent streams for variables
|Recall sample in model
|Reverse convolution tool
|One-click opens relevant interface
|Help file integration
|Functions return informative messages
|Time series fitting
|Using modern (IC) fit statistics
|Link fitted model to data
|Parameter uncertainty simulated
|Results shown in interactive window
|Histogram, cumulative plots
|Box, Scatter, trend plots
|Single window for results
|Share simulation results with non-user
|Six sigma statistics
|Related tools included
|Stochastic numerical integration
|Stochastic differential modelling
|Extreme value modelling
|Model converters for:
|@RISK, Crystal Ball
- ModelRisk's ribbon can be shown in different modes depending on your type of use
- This represents tests of whether functions operate at parameter limits instead of returning an error
- Via PowerBI or Tableau
- Using either the VOSE Results Viewer or Pelican web platform
- Available separately
- Available but will double the price
- Extensive Data Mining tool available but will double the price