Under review

Suggestions for new variography and geostatistics modelling tools

fbilki (Moderator / Admin (AUS)) 10 years ago in Resource Estimation updated by dbartlett 9 years ago 26
Hans Anderson's original post regarding importing semi-variogram parameters has gone off topic (mostly because of my instigation), so I've created this thread to gather suggestions for the new variography tools and keep Hans' thread on topic.

You can see the original thread here: http://forum.micromine.com/topic/413921-semi-variogram-paramaters-window/
This isn't pretty but is apparently the only way to selectively move posts from one thread to another.


Hi Frank,

I would say linking of views, variogram maps and tables, but it looks like you're onto that. From a variography point of view it would be great if we could do correlograms and Gaussian variograms (i.e. do a Gaussian/normal score transform of the data, model the variogram on this, and then back-transform the variogram parameters to use them in an OK estimate). If I can't use raw variograms I fall back to a correlogram or Gaussian, and at the moment I have to go outside MM to do that. The tidy-up of the forms and processes you seem to be doing is a big plus.
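For what it's worth, the transform round trip described above can be sketched in a few lines of plain Python. This is only an illustrative sketch, not Micromine functionality; a production implementation would also despike tied values and interpolate the back transform between ranks:

```python
from statistics import NormalDist

def normal_score_transform(values):
    """Gaussian (normal score) transform: rank each value and map its
    standardized rank to the corresponding standard-normal quantile."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        # (rank + 0.5) / n keeps the quantiles strictly inside (0, 1)
        scores[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return scores

def back_transform(score, sorted_values):
    """Back transform: map a normal score to the data quantile it came
    from (nearest-rank lookup; real software interpolates)."""
    n = len(sorted_values)
    p = NormalDist().cdf(score)
    return sorted_values[min(int(p * n), n - 1)]
```

The variogram would then be modelled on the scores, with the parameters back-transformed for use in the OK estimate.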

Olly Willetts
Guessing Sync Selection is rolling out in a later beta, Frank? The current beta seems to have the buttons in the graph windows, but the option in Vizex is currently hidden? Seems a really good idea.

On the variography, it would be great to be able to compile all your directions in a single window. I find the single plot per window isn't a very effective use of space and tiling the windows vertically/horizontally isn't great. I tend to work with a host of smaller plots similar in style to a matrix plot to provide a good overview for comparative work.

Also the ability to change the scale of the axes would go a very long way. Auto scaling can be a right pain when you're looking at small quantities and the variance spikes.

Would this be better in a dedicated thread?
fbilki (Moderator (AUS))
Thanks Ron. I'm sure you have your own thoughts on this so please do feel free to add your suggestions to the mix.

Looks good, will be great to see the completed version.

fbilki (Moderator (AUS))
Here is a preview of the new variogram map. So far we have only completed the map display; we still need to add the directional selection (and potentially the measurement of an anisotropy ratio):

What might not be obvious in the image is that clicking a cell in the map has selected the corresponding samples in Vizex.

Most of our new statistics tools work this way once you enable the Sync Selection options from the corresponding window or toolbar: if the same file is loaded in Vizex, the File Editor, and two different Stats windows, making a selection in any one of them will select the corresponding records in the others.

It's a great way to see how many lags are affected by any individual sample, or to locate potential outliers by selecting them in the histogram or box plot.
@Olly, Sync Selection is in the current Beta, but it's a per-layer operation in Vizex. You can enable it via the right-click menu on the intended display layer. You could conceivably have different layers linked to different charts.

On the subject of different charts, you can also open multiple instances of any given chart, making it easier to compare different results. (We do have ideas for directly comparing different data sets -- such as a block model against the original composites -- but we've got a lot of work to do before we get to that point. Right now we're concentrating on the basic [i.e. essential] workflow.)

@Ron, how important are the alternative variogram types? For now we are just looking at getting the workflow right by supporting normal and log-normal variograms. We've already had a couple of requests for correlograms, so there's a strong chance of including them. There is an existing issue in our development database, which I'll update, but it would be good to know what you think the priority items should be.
@Olly, regarding compiling different directions onto one chart, it will work in much the same way as an existing variogram fan does now.

The only real difference is that instead of having to lay out a massive fan just to find a direction, you only need to display the variograms of interest. So you might have a couple of different versions for the three directions, and you can happily display all six graphs on the one chart. We are also investigating matrix-style graphs, but we are initially concentrating on basic workflow issues.

The biggest change to the variogram form will be the conversion of the ugly fixed-length matrix of entries into a dynamic grid control, with a bit of a clean up for the various display modes. And automatic colours too. :-)  It will support people who:

  • Prefer to work in the old way using a large variogram fan (and they can now use any number of directions instead of the old fixed limit of 16), or
  • Use the variogram map to interactively find the three directions and then model just those directions, or
  • Need to display a mixture of different graphs, like you.

The basic functionality has always been a part of MM but will be much more streamlined in Micromine 2014.

Hi Frank,

I think the correlogram should be a high priority, as it handles a lot of datasets that are not log-normal (and so probably shouldn't be assessed as such) but for which a normal variogram is too messy to get a good model from. Some resource geos I have spoken with will not use anything else, especially for gold deposits.

I know a few (large) companies that have a requirement for pairwise to be used (especially for the likes of copper porphyries), so while I might not particularly agree with them, a Pairwise option probably should also be considered down the track.
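For reference, the two measures mentioned here mostly differ in how each lag's pairs are normalised. A minimal sketch in plain Python for 1-D, regularly spaced samples (illustrative only, not Micromine's implementation; directional search and tolerances are omitted):

```python
def pairwise_relative_variogram(z, lag):
    """Pairwise relative semivariogram: each squared difference is
    normalised by the squared pair mean, which damps the influence of
    high-grade outliers."""
    pairs = [(z[i], z[i + lag]) for i in range(len(z) - lag)]
    return sum((a - b) ** 2 / ((a + b) / 2) ** 2 for a, b in pairs) / (2 * len(pairs))

def correlogram(z, lag):
    """Experimental correlogram: the lag correlation between head and
    tail values, displayed as 1 - rho(h) so it reads like a variogram."""
    head = z[:-lag] if lag else z[:]
    tail = z[lag:]
    n = len(head)
    mh, mt = sum(head) / n, sum(tail) / n
    cov = sum((a - mh) * (b - mt) for a, b in zip(head, tail)) / n
    sh = (sum((a - mh) ** 2 for a in head) / n) ** 0.5
    st = (sum((b - mt) ** 2 for b in tail) / n) ** 0.5
    return 1.0 - cov / (sh * st)
```

Because the correlogram standardises by the head and tail variances at each lag, it is far more forgiving of skewed, outlier-prone data than the raw semivariogram.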

As for the Gaussian / normal score variograms - I once had an influential geostatistician say to me, "As long as you do a Gaussian then few people will argue". For populations that show diffusivity the Gaussian works well. It is also a requirement for more advanced estimation methods that are gaining strong traction amongst the bigger companies - methods such as UC, LUC, SGS conditional simulation, direct-block simulation and LMIK - both in the estimation and/or as part of the change of support. If Micromine wants to be able to stand up as a credible alternative for resource estimation they will have to consider these methods at some stage, and the Gaussian variograms would be a good lead-in to them. We use LMIK and SGS ConSim regularly, and if Micromine could do these now (including the Gaussian variograms/back transform/change of support) I would be able to save the company the significant cost of using several different programs.
Thanks for the excellent suggestions, Ron.

We'll try to add the correlogram and Gaussian variograms as soon as we can, but we need to sort out the basic workflow beforehand. I think the incremental effort of adding a couple of extra variogram methods will be small anyway, but I can't promise they will be ready by the time we release MM 2014.

As a general comment this applies to all of our new stats functions: the focus in MM 2014 is on developing the right workflow. Whilst we know of many other features that should be added, breaking the job into chunks makes it easier for our developers to concentrate on specific tasks and produce a better result.

Micromine's "Display relative semi variogram from file" mode has always displayed a Pairwise Relative variogram; for some reason the guys who originally wrote it back in the 90s chose not to add this information to the documentation. It will all be refreshed for MM 2014 anyway.

The conditioning and simulation methods have been mentioned to us on a few different occasions. We know that this is a massive topic and it would be great if we could get some industry guidance so that we can focus our attention on the important bits.


If you're covering the basics with this update Frank, the ability to produce and export a clear, report-quality graphic from any of the geo/statistical functions is essential. The current copy/paste/export method does not cut the mustard as it lacks proper scaling of labels, line weights and fonts, as well as flexibility in output image size, aspect ratio or output resolution. More granular formatting options for graphs would also be appreciated if your intent is for users to work exclusively within MM. 

After using Mathematica for quite some time, I find the workbook-style workflow is very natural for data analysis, as you can create a cohesive story rather than a mass of discrete fragments. This is probably at odds with the forms-based function workflow that MM employs, but it's worth putting out there as it is an extremely effective way of conveying information to clients.
Thanks Olly,

I just want to clarify one thing: when you say "The current copy/paste/export method does not cut the mustard", do you mean the current MM 2013 method or the copy to clipboard method in the MM 2014 Beta? We have responded to user feedback about making it as easy as possible to paste stats graphs into a Word report and we feel that the new workflow is a major improvement over the old way.

Can you please provide some examples of your very natural workbook-style workflow along with suggestions on how our typical user base might benefit from it?
Right you are re Pairwise - I just checked to understand. It's a little opaque as to what it is referring to, but it is there all right.

Incremental additions are fine - better than big-bang additions that are not ready and full of bugs. With respect to the conditioning and simulation work, in my conversations with various people there is a big push to be able to better estimate the local metal distribution, as opposed to simply getting a global understanding of the total number. Conditioning methods like UC / LUC / LMIK allow this to a better extent than simple OK or ID estimations (although in my experience they may be no more accurate!). The simulations are being used more and more to control grade control estimates, and to obtain a handle on likely risk around metal.

For Micromine the LMIK method should be relatively simple in that you already do MIK for the panels, and the localising step is simply an OK estimate into SMUs; then it is just a case of developing the maths to bang 'em together (see Abzalov, 2006, Mathematical Geology, Vol. 38, No. 4, pp. 393-411). After the change of support step we use the proportions of grade in each MIK bin to assign a grade to the ranked OK SMU estimate. We do this in Isatis, which we trick into thinking we have a UC estimate - this is a nightmare to do and frustratingly difficult, but we get there in the end - it shouldn't be that hard. LMIK is probably one of the newer methods starting to gain traction. This leads to the UC and LUC methods (both of which are just OK estimates conditioned to an SMU size and are starting to be used more widely - sometimes inappropriately, but that's always the case!). If you can get the Gaussian transform and change of support steps put in then this goes a long way to making these methods possible.
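The localising step itself is little more than a ranking exercise. A toy sketch in plain Python (illustrative only; the bin proportions and grades would come from the panel MIK estimate, and a real implementation would handle rounding and partial bins more carefully):

```python
def localise_panel(smu_ok_grades, bin_proportions, bin_grades):
    """Localising step (after Abzalov, 2006): rank the OK SMU estimates
    within one panel, then overwrite them with the MIK bin grades so
    that each bin occupies its MIK-derived proportion of the SMUs."""
    n = len(smu_ok_grades)
    order = sorted(range(n), key=lambda i: smu_ok_grades[i])
    out = [0.0] * n
    idx = 0
    # Fill SMUs from the lowest-grade bin upward, lowest-ranked SMU first
    for prop, grade in zip(bin_proportions, bin_grades):
        for _ in range(round(prop * n)):
            if idx < n:
                out[order[idx]] = grade
                idx += 1
    # Any remainder from rounding goes to the highest bin
    while idx < n:
        out[order[idx]] = bin_grades[-1]
        idx += 1
    return out
```

The OK SMU estimates only supply the ranking within the panel; the grades themselves come from the MIK proportional histogram, which is what preserves the change-of-support distribution.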

The ConSim methods are fairly entrenched now in grade control circles and for risk-based studies. They are not really used as a resource estimation method (other than perhaps the direct-block method), as each simulation is as correct as any other, but they are a widely used tool. In actual fact there should be no reason why you could not use the same localising maths as for MIK and UC to generate a "localised" simulation with one grade per SMU block - it just works off the proportional histograms for each panel. With simulation, as I am sure you are aware, it is not just the individual simulations (all 50, or 100, or 200 of them) you have to run; it is the ability to do something with them that is important. Having 100 simulations is not much use without being able to report all the stats like the median case, mean case, 95th and 5th percentiles, being able to plot all the GT curves, assess the simulated variograms and histograms, etc. This strikes me as a "chunk" of work in its own right, as it does not really fit with what MM can do at this time - unlike UC or LMIK, which just need some additional coding and a workflow to fit with what you already do.
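The post-processing being described is conceptually simple on a per-block basis. A minimal sketch in plain Python (illustrative only; nearest-rank percentiles, with each realisation represented as a mapping from block ID to simulated grade):

```python
def summarise_simulations(realisations, block):
    """Post-process a set of simulated models: pull one block's value
    out of every realisation and report the summary statistics used in
    risk reporting (mean, median, 5th/95th percentiles)."""
    vals = sorted(r[block] for r in realisations)
    n = len(vals)

    def pct(p):
        # Nearest-rank percentile; real software usually interpolates
        return vals[min(int(p * n), n - 1)]

    return {
        "mean": sum(vals) / n,
        "median": pct(0.5),
        "p05": pct(0.05),
        "p95": pct(0.95),
    }
```

Repeating this over every block (and rolling the results up into GT curves and validation variograms/histograms) is the "do something with them" part that makes a simulation toolkit useful.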

Yeah I know I am understating the work involved and I would not expect all this for the next release but it is a direction I think Micromine should head. You could either do all the work yourself, or put a front end to open source coding that you have permission to bundle into the installation. Just a thought.
Tried the beta copy/paste out this morning and sure, it does what it says on the tin, but it's still dependent on your screen resolution and window size (although it's interesting that you can scale the graphic by manually shrinking your window). The problem comes if you want a large number of high-resolution images at a specific size - something you'd likely macro up for an appendix. 

The workbook essentially consists of having all your analyses and annotations in a single, flowing document along with in-line graphics and analyses which are created on demand from a live data source and exported to high-quality, properly-scaled images automatically. It's a live, coherent statistical summary of a project, rather than a collection of individual formsets/functions which have to be run and results displayed individually.

A typical workflow may comprise:

  • ODBC data connection & preparation
  • Function & key parameter definition
  • Review of points of observation
  • Sample/Lab QAQC
  • Grade/sample statistics
  • Variography & model fitting
  • Resource model validation
The structure of the workflow varies depending upon the commodity and task at hand, but it has proven really flexible to date and I've thrown a bunch of different situations at it.

I'd imagine that a typical user of the statistical tools in Micromine would find a dedicated, centralised repository in which they could compile their analyses, thoughts and parameters alongside the graphical plots really useful. Some folks may work directly into their reports, but a visual staging area has really helped me and some of our clients look at their project data differently. There may even be a way to link such an entity into the resource estimation procedures...
Thanks again, Ron. This is clearly a direction we need to take. We've been hunting down the reference you cited and generally trying to obtain more information on these methods.

The developers' schedules are pretty full for now, but when we do turn our attention to this would you mind being an industry contact so that we keep the new features relevant and focused?
No worries Frank, I receive the odd call from Paul Hooykaas so I guess I am already a bit of an industry contact for you. I am happy to provide whatever help I can, I am no expert in these fields but try to stay abreast of them (actually in my job description I think...).  
Understood and much appreciated, Ron. Thanks.
Olly, thanks for pointing us towards a better workflow for capturing multiple images with a specific size. The developers are investigating practical ways to do so.

The idea of laying out multiple charts onto a single canvas is part of our original spec for the new stats tools, although I'm sure you can understand why it's currently on the extended functionality list. You can see a preview of it in the Box and Whisker plot: when you enter multiple group IDs each group is displayed as a separate chart. You can currently have up to six groups. Here's a quick preview:

A more generic version of this would obviously need separate titles and legends, but this should give you an idea of where we are headed.

Hi Frank, since you're asking, here is a list. It's not really limited to variography or geostatistics, but here are some improvements that would make things a bit easier when estimating resources:
  • A search ellipsoid and rotation visualiser like Surpac has would be nice. You can select the different hand rules and orders of axis rotation and it shows the ellipsoid changing. I know you have this, sort of, for Vizex, which I use all the time, but having it inside the variography or search orientation form would be nice.
  • Block-by-block search orientations for kriging, like MM has for IDW, so you can override the default search orientation for specific blocks instead of having to resort to multiple domains for each orientation.
  • A way to estimate blocks of rock type A with samples coded A, and blocks coded B with samples coded B, in the same estimation pass would be nice. Currently you have to run separate estimations for each rock type and add the models back together.
  • Uniform Conditioning.
  • Rank kriging, block output as true/untransformed values.
  • Jack-knife analysis: estimate back into the samples and review the scatter of the results (known point vs. surrounding points). This is sort of possible currently, but awkward, using macros.
  • Inverse distance composite length weighting. Discount composites that are shorter than others.
  • A block model export tool that makes an origin and rotation like most other programs use for rotated models. Currently, to export to Vulcan or Gems I have to draw strings where the lowest corner of the block would be. The way MM sets up the rotated origin is unlike the origin setup of other programs.
  • Initiate block models within wireframes and below a topo.
  • Relative probability plot overlays with raw assays, composites, and blocks. Currently possible but not without hacking the plot file.
  • Distribution statistics in normal, natural log or log base 10
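To illustrate the jack-knife item above: the idea is leave-one-out re-estimation of each sample from its neighbours, then inspecting the actual-vs-estimated scatter. A toy sketch in plain Python using a 1-D inverse-distance estimate (illustrative only; the function names are made up, not Micromine's):

```python
def idw_estimate(target, samples, power=2.0):
    """Inverse-distance estimate at one location from (location, grade)
    samples; a tiny floor avoids division by zero at coincident points."""
    num = den = 0.0
    for loc, grade in samples:
        w = 1.0 / (abs(loc - target) or 1e-12) ** power
        num += w * grade
        den += w
    return num / den

def jackknife(samples):
    """Leave-one-out cross validation: re-estimate each sample from the
    others and return (actual, estimated) pairs for a scatter plot."""
    pairs = []
    for i, (loc, grade) in enumerate(samples):
        others = samples[:i] + samples[i + 1:]
        pairs.append((grade, idw_estimate(loc, others)))
    return pairs
```

Plotting the returned pairs (and their correlation) gives a quick check on how well the chosen search and weighting reproduce known values.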

Thanks for the very comprehensive list, Geoff. Some of your suggestions are already on our list or are simple additions to similar items.

Because your list is a little off-topic (but entirely worth discussing) I'm going to start a new thread for suggestions for our block modelling/interpolation tools and drop it into that.

I am enjoying the new stat tools in 2014!

Question regarding the box and whisker plots, I am having trouble displaying several boxes side by side.
What I am looking to do is display Au for several different rock types. I am clicking on Group expecting a column-selection pop-up, so I can select a group ID field like a 'group by' in SQL, but it only accepts numeric input. I am not sure how to use the group field.
What I am looking for is one box and whisker plot for each rock type.

Also, I can't figure out how to generate a plot file from any of the stat output charts. I see Export to Image, but the plot file option is missing?
Good day Geoff

The Group column allows you to place the box and whiskers on different lines or in different groups, so if we have no grouping the plot would look like this. If we have grouping, as seen below, the box and whiskers will be plotted on multiple lines, and fields with the same number will be grouped together.

So, for what you seem to be trying to do, you would have to create a new file, cut up your existing file by rock type, place the data for the different rock types in different fields, and then create the box and whisker plot.


You no longer have the option to plot a graph. Because you can now add titles, labels and legends to your graphs, you can simply create a snapshot of the plot.

If you want your graph in a plot simply add it as an image.


Just to add to David's comments. The original spec for the box and whisker plots always included the ability to compare different fields and different files. As per all of these new charts we kept the feature set small for the initial release so we could concentrate on the basic framework. But we do intend to add some sort of key field option that will allow you to group by category.

For now, as David says, you'll have to create a file where the assays for each rock type are in a separate column. You can do this pretty easily using the File Editor and/or a macro. There is no need to maintain the integrity of individual rows, so you can start listing the assays for each rock type from the top of each column to keep the file more compact.
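Outside the File Editor, the same reshaping can be scripted. A rough sketch in plain Python (illustrative only; `records` stands for the (rock type, assay) pairs read from the data file, and columns are padded so the result writes out as a rectangular file):

```python
from collections import defaultdict

def split_by_rock_type(records):
    """Rearrange a flat (rock_type, assay) list into one column per rock
    type, compacting values from the top of each column as suggested --
    row integrity is deliberately not preserved."""
    columns = defaultdict(list)
    for rock, assay in records:
        columns[rock].append(assay)
    # Pad shorter columns so every field has the same number of rows
    depth = max(len(vals) for vals in columns.values())
    return {rock: vals + [None] * (depth - len(vals))
            for rock, vals in columns.items()}
```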

Regarding plotting, the overwhelming majority of feedback about our old stats displays was how painful the rigmarole of creating a plot file was. We took that to heart and eliminated the need for a plot altogether, based on the multiple requests to be able to copy and paste the charts directly into a report. Were you trying to do something different from David's example of embedding the chart as an image frame?



Groupby function in statistics menus
I wrote most of the below before I found this thread. I thought I'd add my two cents in the hope that the squeaky wheel gets the grease.
It is good to see some improvement in the statistical tools available in MM2014.
I think it would be really handy to have the option to run the Quick Summary function with a group-by, similar to the Report Categories option in the Block Model Reporting form.

This functionality would be handy in several of the statistical functions in Micromine, e.g. to separate groups for the box and whisker plots, or to colour points in a scatter plot.

Thanks, looking forward to the grouping in the Box and Whiskers.
As far as plots go, I understand the old way was cumbersome, but I had a lot of control over the output. I agree the new way is appealing, but it is more canned and possibly trades appeal for utility. The old way was unlimited in what you could show simultaneously. For instance, with the older .pel I could overlay populations, and I don't see any way to do this in the new version. With the .pel you could edit the file in a text editor to get it to do what you wanted - see below; not as good looking as the above, but something essential for population comparison.


I can see that for comparing populations the old version is better. We realise that comparing populations is important, and this will be added in a future patch or release. For now we have not got rid of the old graphing functionality, so this can still be used.

Are there other reasons that you would edit the .pel file, other than titles, legends or looking at multiple populations?


Is it possible that label formatting can be added to the multi-purpose charting tool? I really like how it has been set up - perfect for GT curves.
Tonnes past 1 million on the y-axis are labelled using e+ notation. This is fine for review but not that great for reporting purposes. Is it possible to have a standard format, and possibly the option for comma separation? Also, there is no selection for just a line; a point style has to be selected. Could a point style of None be added?
It would also be helpful if there were some way to call something out, like user text boxes and lines, so for instance you could point to the chosen cutoff or show an assay cap limit.

Thanks, Geoff
Hi Geoff,

These are very good suggestions and we'll add them to the development list. I'm glad you're using the Multi-purpose chart -- we specifically had grade/tonnage curves in mind when we designed it. (Please remember this is our first release of this charting framework and we know there's lots more work to be done.) 

I've also hit the exponential notation issue with the chart, and I've gotten around it by manually adding a rescaled tonnage field to the report file and using that instead. Given the range of your numbers I'd suggest dividing the tonnages by one million and storing them in a new field with _Mt appended to its name. Then your chart axis would display much more sensible numbers and your axis label would read something like "Tonnage (Mt)".
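If the report file comes out of a macro, the rescaling is a one-liner per row. A tiny sketch in plain Python (the field names here are illustrative, not actual Micromine field names):

```python
def add_mt_field(rows, tonnes_field="TONNES", mt_field="TONNES_Mt"):
    """Add a rescaled tonnage field so chart axes read in millions of
    tonnes instead of e+ notation."""
    for row in rows:
        row[mt_field] = row[tonnes_field] / 1_000_000
    return rows
```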

Along with the comments above, I noticed the "Distribution Tables" form is flagged as obsolete. I am currently using it as a population-overlay workaround, and I just want to make sure that when this function is removed there will be something to replace it. Is there a newer function that I am missing that generates a distribution table that could be graphed as desired?

Good day Geoff

This function will be replaced in a future release.