
Suggestions for new block modelling/interpolation tools
fbilki (Moderator / Admin (AUS)) 10 years ago
in Resource Estimation
updated by ronald reid 7 years ago
Copied from Suggestions for new variography and geostatistics modelling tools
Geoff Elson
Hi Frank, since you're asking, here is a list. It's not really limited to variography or geostatistics, but here are some improvements that would make things a bit easier when estimating resources:
- A search ellipsoid and rotation visualizer like the one Surpac has would be nice. You can select the different hand rules and orders of axis rotation and it shows the ellipsoid changing. I know you have something like this in Vizex, which I use all the time, but having it inside the variography or search orientation form would be nice.
- Block by block search orientations for Kriging like MM has for IDW. Then you could override the default search orientation for specific blocks instead of having to resort to multiple domains for each orientation.
- A way to estimate blocks of rock type A with samples coded A, and blocks coded B with samples coded B, in the same estimation pass would be nice. Currently you have to run separate estimations for each rock type and add the models back together.
- Uniform Conditioning.
- Rank kriging, block output as true/untransformed values.
- Jack-knife analysis: estimate back into the samples and review the scatter of the results (known point vs. surrounding points). This is sort of possible at the moment but awkward using macros.
- Inverse distance composite length weighting, to discount composites that are shorter than others (see the sketch after this list).
- A block model export tool that makes an origin and rotation like most other programs use for rotated models. Currently, to export to Vulcan or Gems I have to draw strings where the lowest corner of the block would be. The way MM sets up the rotated origin is unlike the origin setup in other programs.
- Initiate block models within wireframes and below a topo.
- Relative probability plot overlays with raw assays, composites, and blocks. Currently possible but not without hacking the plot file.
- Distribution statistics in normal, natural log or log base 10
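To illustrate the composite length weighting item above, here is a minimal sketch of the idea, not Micromine's implementation: each composite's inverse-distance weight is scaled by its length, so a short stub contributes less than a full-length composite. The coordinates, grades, lengths and power are made-up values.

    import numpy as np

    def idw_length_weighted(block_xyz, comp_xyz, comp_grade, comp_length, power=2.0):
        """Inverse-distance estimate where each composite's weight is scaled by
        its length, discounting composites that are shorter than the others."""
        d = np.linalg.norm(comp_xyz - block_xyz, axis=1)
        d = np.maximum(d, 1e-6)                    # guard against a zero distance
        w = comp_length / d**power                 # length-scaled inverse-distance weights
        return float(np.sum(w * comp_grade) / np.sum(w))

    # Hypothetical data: two 2 m composites and one 0.5 m stub
    block = np.array([0.0, 0.0, 0.0])
    xyz = np.array([[5.0, 0.0, 0.0], [0.0, 6.0, 0.0], [0.0, 0.0, 4.0]])
    grade = np.array([1.2, 0.8, 3.5])
    length = np.array([2.0, 2.0, 0.5])
    print(idw_length_weighted(block, xyz, grade, length))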
We're going to start work on our block modelling/interpolation development list once we've bedded down the new stats and geostats tools and workflows. I'll look at your list items and add them where appropriate. Meanwhile, would you mind clarifying a few things for me please?
* Rank kriging: Are you referring to our Uniform/Rank Kriging option? To my knowledge it should be writing true/untransformed grades to the output model, so if it isn't then that should be recorded as a bug.
* Jack-knifing: Have you tried our Stats | Cross Validation tool?
There is some inconsistency in how people use the terms jack-knifing and cross-validation, as this reviewer has noted:
We define cross-validation as a tool that temporarily removes each sample, compares it with the estimated value for that location, and accumulates statistics on all of the differences. Micromine supports cross validation via Stats | Cross Validation (a small leave-one-out sketch follows this list).
My understanding of true jack-knifing is that it is a little more complex than that.
* Initiate block models within a wireframe: If I understand your request correctly you can already do this via Modelling | Block Model Tools | Create Blank by enabling the Restrict blank model with option. Or were you after something different?
* Distribution stats in normal, natural log or base 10: we've been experimenting with this and have concluded that using base 10 logs only changes the scaling of the axis. The appearance of the data does not change when we switch between natural and base 10. Did you have anything specific in mind? If you're in the Beta program you can have a go at the new stats charts.
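As a small illustration of the cross-validation definition given above, here is a leave-one-out sketch that uses a simple inverse-distance estimator as a stand-in for whatever estimation plan is being validated. The sample table and the estimator are illustrative assumptions, not Micromine's internals.

    import numpy as np

    def idw(target_xyz, xyz, grade, power=2.0):
        d = np.maximum(np.linalg.norm(xyz - target_xyz, axis=1), 1e-6)
        w = 1.0 / d**power
        return float(np.sum(w * grade) / np.sum(w))

    def cross_validate(xyz, grade):
        """Leave each sample out, re-estimate it from the rest, collect the errors."""
        errors = []
        for i in range(len(grade)):
            keep = np.arange(len(grade)) != i
            errors.append(idw(xyz[i], xyz[keep], grade[keep]) - grade[i])
        errors = np.array(errors)
        return {"mean_error": errors.mean(), "rmse": np.sqrt((errors**2).mean())}

    # Hypothetical composites
    rng = np.random.default_rng(0)
    xyz = rng.uniform(0, 100, size=(50, 3))
    grade = rng.lognormal(mean=0.0, sigma=0.8, size=50)
    print(cross_validate(xyz, grade))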
We'll investigate the rank kriging issue and determine if it is a bug. If so we'll log it to be fixed.
Interesting idea of combining both a DTM and a wireframe solid. We'll give that some thought and see if we can make it possible.
Regarding the logarithmic axes, we actually did include both bases when we first started developing the new stats features but then took away the base 10 scaling when we realised that a) it didn't change the visual appearance and b) most - if not all - calculations in earth science use natural logs. It should be fairly easy to put base 10 back but we'll need to review it first.
"A block model export tool that make an origin and rotation like most other programs use when using rotated models. Currently to export to Vulcan or Gems I have to draw using strings where the lowest corner of the block would be. The way MM sets up the rotated origin is unlike the origin setup for other programs"
Can you give us a bit more information on this please? Whilst we obviously do things the "Micromine Way" we do need to talk with other applications, so knowing more about the other ways to define origins will make it easier for us to support them properly.
I have been doing some work comparing various block models, which involves a lot of reporting using various grade constraints and DTMs. In Micromine, to do this I am required to create a flag in the model and filter the model by this flag (or flags) in order to report on a specific area. When you are flagging above one surface, below another, and inside wireframe X, that means creating a lot of flags and significant processing time. Has there been any thought at Micromine about better ways of creating a constraint for block model reporting? Something like another tab on the reporting form, or a separate menu item, with several fields and check boxes such as:
Blockmodel Constraint Form
Blockmodel: [HVK201401_OK_RSV.DAT]
Existing Constraint: [ ]
    Constraint Name 1: [ ]
Block Constraint: [X]  Number of Block Constraints: [1]
    Block Constraint 1: [AUPPM] [>=] [0.6]
DTM Constraint: [X]  Number of Surfaces: [2]
    Surface 1: [PIT:ASB_Current] [ABOVE]
    Surface 2: [DTM:topo2012] [BELOW]
Wireframe Constraint: [ ]  Number of Wireframes: [ ]
    Wireframe 1: [ ]
Save Constraint: [X]
    Constraint Name: [Production Pit to Date]
Being able to save the constraint under a name and then recall it, either into the reporting form or as a filter to display the block model, would be a great way not only of constraining the reporting of the model but also of validating and presenting it, and perhaps even of constraining the estimation process, especially if one of the filter options for a block model were a pre-existing constraint file.
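To make the idea concrete, a saved constraint could be a small named definition that is stored on disk and re-applied as a filter. This is only an illustrative sketch: the field names (AUPPM, the surface names, the flat stand-in surfaces) are made up, and it is not a proposed Micromine format.

    import json
    import numpy as np

    # An assumed block table: centroids plus one grade field
    blocks = {
        "EAST":  np.array([100.0, 110.0, 120.0]),
        "NORTH": np.array([200.0, 200.0, 200.0]),
        "RL":    np.array([350.0, 360.0, 340.0]),
        "AUPPM": np.array([0.4, 0.8, 1.1]),
    }

    # A named constraint, saved so it can be recalled for reporting or display
    constraint = {
        "name": "Production Pit to Date",
        "block": [["AUPPM", ">=", 0.6]],
        "dtm":   [["PIT:ASB_Current", "ABOVE"], ["DTM:topo2012", "BELOW"]],
    }
    with open("production_pit_to_date.json", "w") as f:
        json.dump(constraint, f, indent=2)

    def apply_constraint(blocks, constraint, surfaces):
        """Combine the saved block and DTM rules into one boolean mask.
        `surfaces` maps a surface name to a callable z = f(east, north)."""
        mask = np.ones(len(blocks["AUPPM"]), dtype=bool)
        for field, op, value in constraint["block"]:
            mask &= (blocks[field] >= value if op == ">=" else blocks[field] <= value)
        for name, side in constraint["dtm"]:
            z = surfaces[name](blocks["EAST"], blocks["NORTH"])
            mask &= (blocks["RL"] > z if side == "ABOVE" else blocks["RL"] < z)
        return mask

    # Flat stand-in surfaces at fixed elevations, purely for illustration
    surfaces = {"PIT:ASB_Current": lambda e, n: np.full_like(e, 330.0),
                "DTM:topo2012":    lambda e, n: np.full_like(e, 355.0)}
    print(apply_constraint(blocks, constraint, surfaces))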
This doesn't get over the need to flag the volumes in the first place (which sounds like your major point), but then at least each flagging only has to be run once rather than every time the report is run. The grade breakdown can be sorted by the cut-off set.
One problem with this is that the output may contain a whole load of rows you're not interested in, but the tables that come out of the output are so ugly anyway that they always need adjustment.
Apologies if I've missed the point entirely!
Yes, I know about the report categories option; my main issue, however, was when I was asked to report on various EOM pits and design pits against PTD production and various surfaces based on forward plans and metal prices. For example, to report on the last 6 months, last year, last 2 years and last 5 years I need a field flagged for each of these (second-guessing doesn't seem to work - say a field with year of production - because managers always seem to ask for the one (or 20) things you haven't flagged). Say you have 25 variations to report on 4 different models (the MIK / OK / DBSIM / LUC models, to get a handle on why the gold doesn't seem to be where it is supposed to be): it all adds up to a significant amount of process time just to get to the reporting stage - and then some mug asks to see the same using Block Factors! So it's the flagging that is the issue - once flagged up it is relatively easy to report out, but getting to that point is a real process. If you could open up a constraint file, make 25 copies, just change the relevant bits in each one, and then report each model using the constraint files, you would save potentially several days' work (admittedly this sort of request is not a highly common thing, but the point still stands).
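The "make 25 copies and change the relevant bits" step could then be scripted against saved constraints. Again a hypothetical sketch, building on the file layout assumed in the earlier example; the period names and pit surface names are placeholders.

    import json

    periods = {"PTD_6m": "PIT:EOM_2014_06", "PTD_1y": "PIT:EOM_2014_12",
               "PTD_2y": "PIT:EOM_2013_12", "PTD_5y": "PIT:EOM_2010_12"}  # made-up names

    with open("production_pit_to_date.json") as f:
        base = json.load(f)

    # One copy per reporting period, differing only in the pit surface and the name
    for name, surface in periods.items():
        variant = json.loads(json.dumps(base))          # cheap deep copy
        variant["name"] = name
        variant["dtm"][0][0] = surface
        with open(f"{name}.json", "w") as f:
            json.dump(variant, f, indent=2)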
The traditional rotated origin fits a rotated block array, so there are fewer wasted blocks in the full model. When creating an origin for others to use in a different program, you have to harmonise the MM large orthogonal origin with a standard rotated origin. The process is further complicated by the centroid vs. corner definition of the origin.
The way I get around this is by drawing strings that are rotated the same as the model, inserting points at the block dimensions, and fitting a 'Vulcan' rotated origin. In the other program I make a default model and then import by centroid.
I do appreciate MM's import from other programs: MM doesn't care about the origin or extents, you just apply the rotation and block size and you've got a model. Much easier.
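To show the kind of arithmetic involved, here is a generic sketch of moving from a lowest-corner origin to a first-block-centroid origin for a model rotated about the vertical axis. The sign convention, rotation order and which corner each package calls the origin are exactly the details that differ between programs, so treat everything here (including the example numbers) as assumptions rather than any package's definition.

    import numpy as np

    def corner_to_centroid_origin(corner, block_size, bearing_deg):
        """Convert a lowest-corner origin to the centroid of the first block,
        assuming a single clockwise rotation about Z applied at the corner."""
        theta = np.radians(bearing_deg)
        rot = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                        [-np.sin(theta), np.cos(theta), 0.0],
                        [ 0.0,           0.0,           1.0]])
        half = np.asarray(block_size) / 2.0      # offset to first block centre, local axes
        return np.asarray(corner) + rot @ half

    # Hypothetical model: 5 x 5 x 2.5 m blocks, corner origin, 30 degree bearing
    print(corner_to_centroid_origin([10000.0, 50000.0, 300.0], [5.0, 5.0, 2.5], 30.0))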
Hello,
Sorry for necroposting. I found that Uniform Conditioning was mentioned in this topic and I would like to ask: has anything been done in this direction? And conditional simulation as well? Do you plan to add these techniques to MM?
I have recently discovered and have been playing with the GSLIB functions that are available in Surpac. It would be handy, as a first step, for MM to integrate the GSLIB functions to avoid the importing and exporting to SGeMS. Is something like this possible?
I remember raising this with MM some years ago as a possibility - adding ConSim, UC, LUC, LMIK etc. as modelling methods - but I did say MM should go a bit further than everyone else and incorporate the ConSim into the mine planning modules: be able to run multiple simulations, say p10, p25, p50, p75, p90 (or any other selection), through the optimising and scheduling side to allow you to assess mining risk, and also build it into grade control with loss functions, profitability options and automatic digline design based on selected risk options (i.e. the mill is tonnes constrained so pick lower-risk, lower-grade, higher-tonnage options, or you are mining to a head grade so select a lower-tonnage, targeted-grade design).
Geoff and Ron,
We began investigating ways to incorporate GSLIB algorithms into Micromine as a low-priority project early this year. So, to answer Geoff's question, yes, it is possible. In fact I have already developed individual proof-of-concept components, using Python and Tk, that:
The real challenge is interfacing with GSLIB, and we are currently assessing whether to use a third-party library (written in C++ or Python), recompile the Fortran (ugh!) source code into something Micromine can understand, or talk directly to the GSLIB executables (like my proof-of-concept). Each method has advantages and disadvantages, and the path ahead isn't necessarily obvious.
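For what it's worth, the "talk directly to the GSLIB executables" route can be as simple as writing a parameter file and shelling out, assuming the executable prompts for a parameter file name on stdin as the classic GSLIB programs do. This is only a rough sketch; the parameter text shown is a placeholder, not a valid kt3d parameter set.

    import subprocess
    from pathlib import Path

    def run_gslib(exe, par_text, workdir="."):
        """Write a GSLIB-style parameter file and feed its name to the
        executable on stdin, then return whatever the program printed."""
        par = Path(workdir) / "run.par"
        par.write_text(par_text)
        result = subprocess.run([exe], input=f"{par.name}\n", text=True,
                                cwd=workdir, capture_output=True, check=True)
        return result.stdout

    # Placeholder parameter text; a real kt3d.par has a fixed line-by-line layout
    par_text = """\
                      Parameters for KT3D
                      *******************
    START OF PARAMETERS:
    ...                              (remaining lines omitted)
    """
    # print(run_gslib("kt3d", par_text))   # uncomment if kt3d is on the PATH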
However, it's great to see this question raised in the forum, because it means there are even more people around the world who would use this feature. Although I can't promise a delivery date, I will try to move it along a little more quickly. Stay tuned.
Hi all,
I've heard a rumour that Datamine is going to collaborate with Isatis and that simulation tools will be presented in DM.
Micromine must have such functions or it will be blown out of the water.
Not rumour - fact. Datamine have teamed with Isatis and have rolled out the Isatis UC/LUC process, the KNA workflow, and the Isatis automatic variogram sill fitting (I think), among other things, as part of the Studio RM package. They had to do something, as Datamine (Studio 3) was looking dated to say the least - it had not changed / modernised much in over 20 years. In look, feel and usability Micromine is a long way in front, just missing some of the more advanced estimation methods.
I would like to see some of these enhancements, but I think it's more important to focus on reorganising and enhancing the basic estimation setup. Currently it comprises a multitude of embedded forms, and the variogram ones are doubly embedded. Let's get all of the parameters onto just a few input panels where you can see what you have while you are building. Right now, the best solution I've seen is to avoid the Micromine estimation forms altogether, building the run files in Excel and pasting them into a macro. Still, this requires lots of formatting, and the auditor doesn't know what all of the numbers and filter numbers refer to. On the output side, each run should then have a report that shows the block ranges, samples used and filtered, sample stats, search summary, blocks estimated, mean of the estimate, block/composite variance ratio, etc., so you can see if there was a problem right away. The basic output from a GSLIB kriging run would be a reasonable starting template.
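A per-run summary along those lines is easy to sketch from the output model and the composite file. The column names below are assumptions, and the block/composite variance ratio here is simply the ratio of the two sample variances; a real report would add the search and filter details mentioned above.

    import pandas as pd

    def run_report(blocks: pd.DataFrame, comps: pd.DataFrame, grade="AU", est="AU_OK"):
        """One-line summary per estimation run: what went in, what came out."""
        estimated = blocks[est].notna()
        return pd.Series({
            "composites_used":      len(comps),
            "composite_mean":       comps[grade].mean(),
            "blocks_in_model":      len(blocks),
            "blocks_estimated":     int(estimated.sum()),
            "estimate_mean":        blocks.loc[estimated, est].mean(),
            "block_comp_var_ratio": blocks.loc[estimated, est].var() / comps[grade].var(),
        })

    # Hypothetical tables
    comps = pd.DataFrame({"AU": [0.5, 1.2, 2.8, 0.9]})
    blocks = pd.DataFrame({"AU_OK": [0.8, 1.1, None, 1.4]})
    print(run_report(blocks, comps))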
It would be great to have a few options for reserve reports. Little things can be an aggravation - having the density always reported after volume and tonnes! No one reports reserves like that, so formatting for any other purpose requires a column re-ordering every time, putting density in the correct position, before tonnes (i.e. to show tonnes/grade/ounces). And the titles that you always have to replace... why not have the ability to set report formats?
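As an aside, the re-ordering itself is mechanical, which is why having to repeat it every time grates. A throwaway sketch with assumed column names:

    import pandas as pd

    report = pd.DataFrame({"VOLUME": [125000.0], "TONNES": [337500.0],
                           "DENSITY": [2.7], "AU_GT": [1.85]})
    report["AU_OZ"] = report["TONNES"] * report["AU_GT"] / 31.1035    # grams per troy ounce
    report = report[["VOLUME", "DENSITY", "TONNES", "AU_GT", "AU_OZ"]]  # density before tonnes
    print(report)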
I think the addition of local anisotropy, such as is included in IDW estimation, is needed - unfolding is not always practical, e.g. in the case of a dome.
Micromine does not explicitly handle soft domains. This is a significant gap.
Another item: to accomplish a multi-pass run, temporary models are required as input so as not to wipe out data from a previous run.
There should be a check box for a flag variable so you can easily track the estimation pass with a value of your choosing instead of having to "program" this into the estimation macro with an extra procedure after each run.
The variogram inputs are very unforgiving, so you have to use absurd precision if you are inputting, e.g., a correlogram from another utility.
The MIK section should be completed.
It would be very useful to include simulation in the basic tools.
Does anyone else think it would be useful to display filters for items loaded in the viewer?
Some of the GSLIB utilities would be nice to have in Micromine, and there are a few I would use regularly. Cell declustering in Micromine would be good, since it's so clunky in GSLIB. I prefer the Micromine workaround for contact analysis presented by R. Reid, and it could be incorporated. Others are easily reproducible in Excel or can be run outside of Micromine in a DOS window. A lot of this is lipstick; I'd prefer a focus on fixing or enhancing things that you can't really do outside of Micromine, as with most of the items listed above. I appreciate the fixes and enhancements made in the last few years, but I still feel that the Micromine estimation workflow enhancements should continue with high priority.
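Cell declustering is one of the easier GSLIB utilities to reproduce outside GSLIB. Here is a bare-bones sketch of the standard weights for one fixed cell size (no origin offsets or cell-size search), with made-up coordinates and grades.

    from collections import Counter
    import numpy as np

    def cell_decluster_weights(xyz, cell_size):
        """Classic cell declustering: each sample is weighted by 1 / (number of
        samples sharing its cell), then the weights are normalised to sum to n."""
        cells = [tuple(c) for c in np.floor(np.asarray(xyz) / cell_size).astype(int)]
        counts = Counter(cells)
        w = np.array([1.0 / counts[c] for c in cells])
        return w * len(w) / w.sum()

    # Hypothetical clustered data: a tight cluster of three plus two lone samples
    xyz = np.array([[1.0, 1.0, 0.0], [1.5, 1.2, 0.0], [1.2, 0.8, 0.0],
                    [50.0, 50.0, 0.0], [90.0, 10.0, 0.0]])
    grades = np.array([2.1, 2.3, 1.9, 0.4, 0.6])
    w = cell_decluster_weights(xyz, cell_size=10.0)
    print(w, (w * grades).sum() / w.sum())    # weights and declustered mean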
Glad to hear someone is using my boundary workflow.
I agree that the Micromine estimation workflow is a bit cumbersome, and I concur about the output reports. They can be a little problematic, especially when everyone is expecting Tonnes/grade1/grade2/metal1/metal2 format as opposed to Micromine's Tonnes/density/grade1/metal1/grade2/metal2 format. The only ones to use that format are our Mining Engineers...
We use the macros a lot as a type of "workflow" form, with comments and column headers for all of the per cent fields; this means we have an auditable workflow and you can easily check, change and re-run the model, e.g.:
You then do not actually need to use multiple temporary models for separate estimation passes, nor an extra macro step to flag the model: by writing the pass number to an estpass field in the output "Add Fields" section of the estimation form, you can flag the model during the estimation run;
You then simply have a separate "pass2" filter for the model that selects only the unfilled cells in the domain:
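In script form the same idea is just: write the pass number as you estimate, then restrict the next pass to blocks still empty. A hedged sketch with assumed field names (AU_OK, ESTPASS), not Micromine form settings and not the screenshot referred to above.

    import numpy as np
    import pandas as pd

    blocks = pd.DataFrame({"AU_OK": [0.9, np.nan, np.nan, 1.4], "ESTPASS": [1, 0, 0, 1]})

    # Pass 2: only blocks not filled by pass 1 are eligible
    pass2 = blocks["AU_OK"].isna()
    blocks.loc[pass2, "AU_OK"] = 1.1      # stand-in for the pass-2 estimate
    blocks.loc[pass2, "ESTPASS"] = 2      # flag the pass during the run, no extra macro step
    print(blocks)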
We handle the soft boundary issue by expanding the estimation domain by the required amount and then using this WF to flag the composites. I actually do this in Vulcan too, even though it can handle soft boundaries - mainly because I prefer to work with "hard" coded data rather than "soft" form settings that I cannot audit (I've been bitten before by trusting the software).
Vulcan has one of the best workflow set-ups I have used: an estimation editor with a tree containing all the different forms under it, allowing you to select the Estimation ID (say OKpass1), after which it populates all the levels as required; you simply step through the tree and enter the required values:
This editor looks back to a table set-up with each row being an Estimation ID with all the required data (a bit like your Excel idea but internal).
It works well as long as you do not start with the editor and then swap to making changes in the table - Vulcan gets lost and you end up with incorrect fields filled. Some people prefer to use the editor table straight up and bypass the form editor, but I like the workflow implicit in the form editor - it is easier to read and follow. You then simply call the form and Estimation ID in a command macro (although you can step through and do it manually). It would be nice if Micromine had something similar: a form front end that you can follow down and enter/edit as you go. It could look back at the same forms as currently exist, I guess; in reality it is just a minor re-ordering of the current tab-like set-up, with a few extra options included. I guess it is like having a form as a front end to a dynamic "macro", which is similar to how we set up our macros - containing all the links to the different forms and set-up information.