
Easiest way to create Swath Plots in MM

rpooler, 8 years ago, in Resource Estimation (updated by Pedro Nader 3 years ago)
Hi All,

I'm relatively new to Micromine and I need to create easting, northing and Z-axis swath plots comparing composite values to estimated blocks in several sections and plan views. I've searched quite a bit and have only come up with very tedious options. Does anyone know of a relatively quick way to do this in Micromine?

Thanks.
Hi Rpooler,

Welcome to Micromine and the forum!

The simplest way to produce a basic swath plot is to display the composites and block centroids in Vizex as Point layers using grade instead of elevation for the Z axis.

To display a northing or easting swath plot, set your view to Looking North or Looking East, or use an angled (transform) view.

If the magnitude of the grades is vastly different from the X-Y coordinate values, use vertical exaggeration (Tools | Options | Vertical Exaggeration) to change the Z-scaling. You can toggle this on/off by clicking on the VE notification in the bottom right corner of the Micromine window.

Because you'll definitely need vertical exaggeration, grade must always be the Z field. So, to display a plan (Z-axis) plot you'll have to use elevation (instead of northing) as the North field and ensure you are Looking North.

Be sure to give the composites and blocks different symbols or colours so you can clearly see which is which. To limit the amount of visible data to specific sections, enable Clip View to display a cross-section as if you were interpreting the drilling. Increasing or decreasing the Window Towards and Window Away will allow you to include more or less data.

This plot gives you a quick qualitative way to compare the composite and block grades and highlight outliers or errors. Obviously it doesn't do any of the quantitative work of a true swath plot, like counting the number of samples supporting the grades and averaging the grade values, but you can generally deduce this information qualitatively by inspecting the plot. Here's an example from our training (composites in green, blocks in brown):




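For anyone who'd rather script the same qualitative check outside Vizex, here is a minimal matplotlib sketch of the idea (the file and column names, East and AU, are assumptions - match them to your own exports):

```python
# Plot grade against easting for composites and block centroids,
# mirroring the grade-as-Z trick described above.
import pandas as pd
import matplotlib.pyplot as plt

comps = pd.read_csv("composites.csv")   # hypothetical export
blocks = pd.read_csv("blocks.csv")      # hypothetical export

fig, ax = plt.subplots()
ax.scatter(comps["East"], comps["AU"], s=8, c="green", label="Composites")
ax.scatter(blocks["East"], blocks["AU"], s=8, c="brown", label="Blocks")
ax.set_xlabel("Easting (m)")
ax.set_ylabel("Grade")
ax.legend()
plt.show()
```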
As you've indicated, there are ways to create a true swath plot in Micromine that require you to combine a number of separate steps. Stepping beyond your original question, I encourage you to explore these methods and consider developing a workflow of your own. Once it's developed you can save it as a Micromine macro or record it as a Python script, which you can instantly re-run whenever you need it. Python scripts are easily shared with others, too. Adding true swath plots to Micromine is on our development list and we'll get onto it in the very near future.

Frank

Hi Robert,

The way I have done this in the past is with the statistical estimate function, e.g.:


Here I create a block model that is the swath pattern (say 40 m in easting but only one block in RL and one block in northing).

I fill the model with the average composite grades, create another model filled with the estimated grades, then merge the two using Add two block models together.

Then use the multi-purpose chart to plot the data.

The resulting swath plot...

If you are using a weighted declustered grade you should use this as your composite grade rather than raw data - or even plot them all (naive, weighted and model) to see the result.
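For checking the numbers behind this workflow, here is a rough pandas equivalent: bin composites and block centroids into 40 m easting corridors and average the grade in each (file and column names are assumptions). Like Ron's statistical model, this treats blocks as points - a simple mean rather than a volume-weighted one.

```python
import pandas as pd

def swath_means(df, coord="East", grade="AU", width=40.0):
    bins = (df[coord] // width) * width          # corridor origin
    return df.groupby(bins)[grade].mean()

comps = pd.read_csv("composites.csv")
blocks = pd.read_csv("block_model.csv")

swath = pd.DataFrame({
    "Composites": swath_means(comps),
    "Blocks": swath_means(blocks),
})
print(swath)   # one row per 40 m corridor, ready to chart
```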
Frank/Ron,

Thanks for the quick replies. This was very helpful!

Cheers,

Robert
Ron, out of curiosity, did you reblock your existing block model to get the estimated grades? I was developing your exact workflow yesterday but got stuck on the reblocking stage because I couldn't create an unrotated model consisting of single blocks in one axis.
Hi Frank - no, much more basic than that I'm afraid. I just treated the block model as a cloud of points and simply averaged the points in the statistical model as per the composite data (did I just hear a sharp intake of breath from geostats types everywhere...). This replicates a process I have which uses Access to suck in the data and then averages the grade of the blocks and the composites in the same window. I figured if I can do it in Access I can do it in Micromine and save myself the hassle of the extra exports/imports and manipulations - and Micromine macros are easier than Access! The process works and replicates my Access database method, which I "borrowed" from an old consulting company.

It would be good to have a simple cell declustering method in Micromine that declusters via a moving window and writes the decluster weights to the composite file, though - at the moment I do it in Isatis or using GSLIB's declus.exe. I suppose you could use the statistical model to count your points inside a specific cell size and then use the calculator to develop a weighting, but I am not sure how to migrate that "weight" to the composite file. It would be nice to have a method that runs through several cell sizes and plots them out as a graph, allowing you to select an acceptable cell size, and then writes the weights from the selected cell size automatically.
Thanks Ron.

That's a very pragmatic solution. Given the huge aggregation of data into the intervals shown on the swath plot, I'd imagine treating blocks as points would have a very small effect on the final result (simple mean for points vs. volume-weighted mean for blocks). Maybe that's a question for the purists to answer. :-)

I guess if you were pedantic you could calculate a volume-weighted grade based on the block volume and then average that - but I am not that pedantic and have never seen the point - not really interested unless it has a material effect. I'm a geo, not a statistician, and as long as it works I am happy!
Hi Ron, Hi Frank,

Just thought I'd pipe up to second Ron's suggestion for simple cell declustering! I'm yet to find a satisfactory way to do this in MM.

Erik Scheel of MM Support mentioned that rotated statistical BMs will be available in version 16? This will go a long way toward making swath plots possible in any direction via Frank's method.

All the best,

Leon
Hi Leon and Ron (and apologies to Robert for hijacking the topic),

Can you please clarify what you mean by "moving window" when you talk about cell declustering? I presume you mean a regularly-spaced array of adjoining rectangular cells as shown in the classical textbooks? (Where 2D cells obviously become blocks in 3D.) Or did you mean a literal moving window, where each cell is centred on the point whose weighting you are calculating?

Assuming regularly-spaced cells, you can get 80% of the way there by generating a statistical block model via Modelling | 3D Block Estimate | Statistical with Write number of points enabled. The bit you can't do right now is merge (1/n) back into the composite file. However, there is a fairly simple workaround if you're happy to work in orthogonal (not rotated) coordinates and for the weights to be calculated for parent blocks only (a Python sketch of the equivalent calculation follows the list):

  1. Generate a statistical block model using whatever cell size you think is appropriate (it doesn't have to match the resource model), and be sure to enable Write number of points and Write block index. Make a note of the block definition you used here.
  2. Use the Field Calculator (File | Fields | Calculate) to calculate 1/n for the new statistical block model.
  3. Generate a matching block index in your composite file (Modelling | Block Model Tools | Index | 3D Block Index) using exactly the same definition as your statistical model.
  4. Using the indices as key fields, merge 1/n from the statistical model into the assay/composite file (File | Merge | Micromine).
  5. Voila! Each record in the assay/composite file now has a declustered weight.
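As promised, here is a pandas sketch of the same five steps in one pass, assuming an exported composite file with East, North, RL and AU columns (the cell size and names are assumptions, not Micromine's own):

```python
# Index each composite into a regular cell, count points per cell,
# and attach weight = 1/n (cell declustering for a single cell size).
import numpy as np
import pandas as pd

comps = pd.read_csv("composites.csv")
cell = np.array([20.0, 20.0, 10.0])               # X, Y, Z cell size

# integer block index per axis, combined into one key (steps 1 and 3)
idx = np.floor(comps[["East", "North", "RL"]].to_numpy() / cell).astype(int)
comps["CellKey"] = [tuple(r) for r in idx]

# n points per cell, then 1/n attached to each record (steps 2, 4, 5)
n = comps.groupby("CellKey")["CellKey"].transform("size")
comps["DeclusWt"] = 1.0 / n

# declustered global mean for comparison with the naive mean
print(np.average(comps["AU"], weights=comps["DeclusWt"]))
```

Because np.average normalises the weights, weighting by 1/n gives each occupied cell equal influence, which is the textbook cell-declustered mean.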

Regarding adding this option to MM, as it turns out we recently added a related feature to the grade interpolators -- Accumulate sample weights. Although this particular function is very different from what we need here, it's an example of writing data back into the input file and it hints at a similar change we could make to the statistical block modelling function.

I'll add a development request for a Write declustered weights option, which will write (1/n) to the input records falling within each cell. It should be a relatively simple change (although we won't know until we get into the code) and will make the cell declustered weights much more accessible.

Frank

Hi Frank,
There are two sorts of "moving windows" that I have come across. One simply calculates the weight inside a window of XYZ dimensions, writes the weights to the composite file, calculates a declustered grade, and moves to the next window. The other sets a number of "offsets" for each cell: the origin is shifted and the weights recalculated for all offsets, then the average weight and grade are recorded. I think (although I am happy to stand corrected) that this is how GSLib works when you run the declus.exe program. I know you can set the cell size and number of cells, but also a number of offsets to use in analysing the data to determine the optimum decluster size. I am sure you have better minds than mine that can pull the GSLib code apart and see what it does.
Oh, yes, I recall reading about that in a textbook a while back. It seemed to me like a way to dither or randomise the cell boundaries to avoid getting poor estimates when the cell size was close to the natural spacing of the data. But I'd have to hunt down the text and re-read it to be sure.
Here's a quick extract from p. 243 of Isaaks and Srivastava:
If there is an underlying pseudo regular grid, then the spacing of this grid usually provides a good cell size. In our Walker Lake example, the sampling grid from the first program suggests that 20 x 20 m2 cells would adequately decluster our data. If the sampling pattern does not suggest a natural cell size, a common practice is to try several cell sizes and to pick the one that gives the lowest estimate of the global mean. This is appropriate if the clustered sampling is exclusively in areas with high values. In such cases, which are common in practice, we expect the clustering of the samples to increase our estimate of the mean, so we are justified in choosing the cell size that produces the lowest estimate.
And p. 81 of Goovaerts (turns out this is the one I recall reading):
When the sampling pattern does not suggest a natural cell size, several cell sizes and origins must be tried. The combination that yields the smallest or largest declustered mean is retained according to whether the high- or low-valued areas were preferentially sampled. To avoid erratic results caused by extreme values falling into specific cells, it is useful to average results for several different grid origins for each cell size.
This is possible, although slightly awkward, to do in MM by writing a Python script that increments the X, Y, and Z block sizes through the desired range, optionally dithers the origin for each size combination, and calculates the corresponding declustered global mean.

We call this the incremental test and we provide a basic example, without origin dithering, in our resource estimation training course. Along with reporting the minimum declustered mean and corresponding X-Y-Z dimensions, it also creates a block model where each block's location corresponds to the X-Y-Z block size used to estimate it, and its grade is the declustered global mean for that size combination. It's a great way to see in 3D how sensitive the data is to changes in block size.
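A minimal sketch of such an incremental test, with the optional origin dithering, might look like the following (file and column names, size range, and five dithered origins per size are all assumptions):

```python
# Loop over a range of cell sizes, dither the origin a few times per
# size, and record the averaged declustered global mean for each size.
import numpy as np
import pandas as pd

def declustered_mean(xyz, grades, cell, origin):
    idx = np.floor((xyz - origin) / cell).astype(int)
    keys = pd.Series(list(map(tuple, idx)))
    n = keys.map(keys.value_counts())            # points per occupied cell
    return np.average(grades, weights=1.0 / n)

comps = pd.read_csv("composites.csv")
xyz = comps[["East", "North", "RL"]].to_numpy()
au = comps["AU"].to_numpy()

rng = np.random.default_rng(0)
for size in np.arange(10.0, 110.0, 10.0):        # 10 m .. 100 m cells
    cell = np.array([size, size, size])
    # average several dithered origins to smooth erratic results
    means = [declustered_mean(xyz, au, cell, rng.uniform(0, cell))
             for _ in range(5)]
    print(f"{size:5.0f} m  declustered mean {np.mean(means):.3f}")
```

Plotting the printed means against cell size gives the curve discussed below, whether you pick the minimum or the point where it flattens.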

Having said all that, adding an incremental test with optional origin dithering is a lot more work than simply adding a checkbox to an existing dialog. In my opinion it justifies an entirely new menu option. But given that it's an essential EDA tool, I'll try to squeeze it into the existing development list. No promises though.
Yes, I remember those references now (or references referencing those references...). These days, though, we tend to look at where the curve of the data starts to flatten out rather than going for the minimum grade.

I don't remember doing that when I sat in on your resource estimation course a few (...eight?) years back. The training manual we got was still a draft - in places it was still in Russian English because your Russian guys were writing it. I think you said you had a couple of guys going through it at the time, changing the direct translation into something less abrupt.

I think cell declustering would be a good option to have in Micromine's toolkit, so it would be good to see some development time put in. In the short term there is always GSLIB's declus.exe to pump out the results for anyone who wants to run the tests. You just have to drop the composite file out of Micromine, run the exe and suck the results back in.
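For the "drop the file out, run the exe, suck the results back in" roundtrip, a small helper like this can write the simple Geo-EAS format that GSLIB programs read (title line, variable count, variable names, then space-separated data rows); the file and column names are assumptions:

```python
import pandas as pd

def write_geoeas(df, path, title="composites for declus"):
    cols = list(df.columns)
    with open(path, "w") as f:
        f.write(f"{title}\n{len(cols)}\n")
        f.writelines(c + "\n" for c in cols)
        df.to_csv(f, sep=" ", header=False, index=False)

comps = pd.read_csv("composites.csv")            # hypothetical export
write_geoeas(comps[["East", "North", "RL", "AU"]], "declus.dat")
```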
Wow, Ron. That course was so long ago you must have been using kerosene-powered computers back then! (Ask me about kerosene TVs someday....)

Was I the trainer? You probably attended it just before I got my hands on the courseware in 2007. Or maybe even a year or so after that, because the first big update took me nearly 18 months to write. But it's definitely not your granddad's resource estimation training manual any more.

I'm just writing up the spec for a new Cell Declustering menu item. Although the scope is pretty well understood it'll still be a big job, but at least it's in the system and available to be worked on once it bubbles to the top of the list.

Interesting comment about looking for where the curve starts to level off. Here's yet another pragmatic decision that the textbooks don't tell you about.... Presumably your inspection looks for flattening starting at the low-grade end of the curve (assuming clustered high grades)?
Yes Frank, you were the trainer (I remember you started with a model of the earth to demonstrate "thinking outside the box"). I remember you saying you were working on the coursework update at the time.

Yeah, we look at the curve on the lower end and make a determination (e.g. below, where there are 3 possibles) - it always makes for some interesting discussion as everyone has an opinion...


We have found using the lowest mean can wipe out a lot of the natural variability that you sometimes want to capture. Drill data are always high-grade clustered to some extent - I don't know too many exploration geos that preferentially drill just the low-grade zones!

Very nice topic, thank you all for the tips!!!


Swath plots and contact plots are such routine tools that I would like to see them added to Micromine as menu items. The workarounds are cumbersome, even if semi-automated through scripting or macros. In the case of swath plots, it's just quicker and easier now to dump the model to Excel or GSLIB than mess around in the software. They'd be more useful to me than some of the plots available on the menu.


Not sure it is easier to create in Excel or GSLIB. I used these years ago to do contact plots and swath plots when I started doing Leapfrog Mining and Leapfrog Geo models (after Geo came out); nowadays I dump the models out and do this process in Micromine because it is easier. Sure, there is a process you have to follow in Micromine, but it is not particularly onerous, and I like the fact that the contact plots in Micromine are normal to the wireframe and not down the hole like most other programs, which can be quite misleading. I do, however, agree that it would be good to see swath plots (and contact plots) added as a menu item. I am sure the process I discuss above could be integrated into a behind-the-scenes process.


Further to Don's comments, grade-tonnage curves should also be routine in Micromine, without having to get fancy with multiple steps/macros/scripts. I was just working with a consultant using Vulcan who then quickly generated a GT curve in Excel based on user-defined cutoffs. You had to clean the spreadsheet up a bit, but it did the job very quickly and with great ease.


Aaron, not sure what you mean - you can very quickly generate GT charts in Micromine by creating a GT report, then using multi-purpose charts to create the GT graph. Very quick and simple, and certainly no more onerous than in Vulcan (which I happen to use regularly), and certainly easier than Surpac (also used regularly).


Step 1 - generate a GT report using the standard reporting function...


Tip - ensure you generate a GT-specific reporting cut-off set (sorted largest down, as this means the CumTonnes etc. add correctly).

Step 2 - select Multi-purpose chart.
Step 3 - fill in the Multi-Purpose chart form. I tend to use the calculator on the file to create a kilotonnes or million-tonnes field to pick for my Tonnes Y axis - it helps remove the scientific notation (1.7e6 etc.).

Bob's your mother's aunty - a GT chart. You can simply resize the axes, numbers and chart dimensions by making the tab a floating window and resizing it.

A very simple process - not sure what is difficult about it, and certainly no more difficult than Vulcan.
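For anyone who wants to sanity-check the report's numbers outside Micromine, here is a rough pandas sketch of the arithmetic behind a GT report - total tonnes above each cut-off and the tonnage-weighted mean grade (column names and the cut-off set are assumptions):

```python
import pandas as pd

bm = pd.read_csv("block_model.csv")              # needs Tonnes and AU
rows = []
for cutoff in sorted([0.0, 0.5, 1.0, 1.5, 2.0], reverse=True):
    above = bm[bm["AU"] >= cutoff]
    tonnes = above["Tonnes"].sum()
    grade = ((above["AU"] * above["Tonnes"]).sum() / tonnes
             if tonnes else float("nan"))
    rows.append({"Cutoff": cutoff, "Ktonnes": tonnes / 1e3, "Grade": grade})

gt = pd.DataFrame(rows)        # kilotonnes, per Ron's Y-axis tip
print(gt)
```

The cut-offs are iterated largest-down, which mirrors Ron's tip about the cumulative tonnes adding correctly.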

Swath plot in Micromine with volume-weighted grade based on the block volume


To create the weighted grade based on block volume I used the same methodology that Ron Reid published here; I just had to change the grade of the swath block model.


I created a block model that encloses my entire block model, varying 10 metres in RL.




I created the composite swath file.


Then I started to configure the swath block model file. I had to check the Write block index box to use it as an input field later.



This field contains information about each swath layer.




In Block Model Assign I fill the original block model with these block indices, using my swath block model as the input, so I will know which swath layer each block matches.

So my block model will be populated with this information


In File -> Fields -> Expression I create an auxiliary field that is my total volume for each block (multiplying the extents of the block).




In File -> Fields -> Expression I also created an auxiliary field that is my phosphate grade x volume for each block.


Next, I needed to accumulate these two new fields of my block model for each index, so I went to File -> Fields -> Accumulate.



This was the result



Then I calculate the weighted grade by dividing (PG x Volume) by Volume in the accumulated file, in File -> Fields -> Expression.
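For anyone wanting to verify these expression and accumulate steps outside Micromine, the same arithmetic condenses into a short pandas sketch (the column names XINC, YINC, ZINC, P2O5 and Block_Index are assumptions):

```python
import pandas as pd

bm = pd.read_csv("block_model.csv")

bm["Volume"] = bm["XINC"] * bm["YINC"] * bm["ZINC"]     # extents product
bm["GxV"] = bm["P2O5"] * bm["Volume"]                   # grade x volume

acc = bm.groupby("Block_Index")[["GxV", "Volume"]].sum()
acc["WtGrade"] = acc["GxV"] / acc["Volume"]             # weighted grade

print(acc["WtGrade"])    # one volume-weighted grade per swath layer
```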


Then we need to merge this new field into the swath block model to complete it, in Modelling -> Block Model Tools -> Combine.

The key fields will be the Block_Index.




The Merge Fields will be the weighted grade


So, our swath block model file is complete. Now we can merge it with the composite swath file and plot it in the multipurpose chart as Ron taught us.


This is the result

Below is the non-weighted grade swath plot.

I hope it helps.





Hi Ronald, thanks for pointing me in the direction of the multi-purpose charts! I had been expecting such a feature, but was always looking under the Stats menu. I didn't expect to see it tucked away under the Display menu - this seems like the wrong place for it.

Yeah, I always thought it seemed a bit out of the way, but I guess you are "displaying" stuff rather than "analysing" stuff like you do in the Stats menu, so I suppose it makes a certain type of sense...


I am sure you will all be happy to hear that the Micromine development team is working on including a swath plot tool in our next release. It will properly support rotated block models as well.

A G-T curve is possible in the multichart functions, it is true, but it's not something I can paste into a report. There's not much label control, number formatting or gridline control. It will be great to have the swath plot built in, along with contact plots and the ability to specify soft domains, soft domain search distances, and soft domain anisotropy in the estimation process without having to end-run the software with multiple extra process steps. We'll look for the swather in the next release, then. Thanks, Scott!

Hi Donald, not sure what you mean by "not something I can paste into a report" - I paste them into reports all the time. It is true some more control over the various elements would be nice, but you can resize the axes, and scale and space the grid lines etc., by editing the Axes tab, and you can resize the graph by making it a floating window and shaping it accordingly. Sure, there are improvements that could be made, but I personally think it is a better result straight off the bat than any of the other GMPs, where you are forced to use a spreadsheet for anything more than a simple sketch-like graph. If you really want all the formatting of something like Excel then use Excel. As for the estimation process, I agree there is some modernising work that needs to be done, but I am sure they are working on it - looking forward to seeing the swath plots, Scott.

Well, lots of opinions on G-T curves; not so much a priority. They more or less work, versus box-and-whisker plots, which are not configured in a useful way (we want to plot a grade element by rock type or some other code, not plot a series of different codes selected by a filter). Or contact and swath plots - missing, or only attainable with diminishing returns considering the time involved versus exporting for processing in other software.


I think there are some important areas for development on the resource side. What does everyone think of the deeply embedded formsets involved in estimations - filters, searches, variograms? Any interest in doing away with that, or at least making it so they are all gathered into a scrollable parameter file that can then be edited and/or executed - oh, and exported and audited? Once you build this you can forget about all of those mouse clicks trying to access forms within forms.


Add the ability to control the names of result variables? Allow setting of capping values instead of having to alter the input file? Scrap the sector search for the more industry-standard quadrant/octant search (in order to duplicate workflows in other software, among other reasons)? Enable soft boundaries? In my opinion, these things are really high priority. A workaround for soft boundaries is possible and I do it, but it involves extra steps which should not be necessary and it introduces more possibilities for error.

Ron, yes, you 'can' get to a G-T curve all in Micromine, which is very good at the end of the day, and which I have done many times. Perhaps I could have been more precise with my language, however. It would be nice to have a more singular and simple built-in function that combines the steps and generates the final result, or graph in this case (table of the data included, of course). Don's comments about having greater control of the labels, number formatting and gridline control ring true; thus the final chart could have greater controls or be exportable to Excel. Let's turn 3 or more steps into 1 step.

Lots of very interesting comments here. I have always found it easier to present resource reports and G-T curves in Excel so that I can cut and paste the chart directly into a report. I export my (massaged via macro) resource summary to Excel from within MM, and in the spreadsheet I have a page with easy-to-cut-and-paste tables and curves that update to reflect the current figures. Having the ability to round to significant figures in Micromine helps (although you currently have to cut and paste the numbers, as the export does not observe the significant-figure formatting; this is, I believe, being sorted by MM).

My biggest issue is with the actual resource reporting. I use a macro to produce my report file as I cannot see a way to generate the report I want in one pass. I want my reports sorted by domain and resource category, grade (descending), with the resource categories in the order Measured, Indicated, Total M&I, Inferred and Total MI&I, followed by All Measured, All Indicated, All M&I, All Inferred, All MI&I.

When reporting to 43-101 there are rules about reporting totals that include Inferred tonnes, hence the category of Total M&I. The big difficulty is controlling the sort order in the output file: the sequence in which you want to report domains may not match the alphabetic or numeric coding of the domains, and similarly the resource category names do not sort alphabetically. I manage it by adding extra sorting fields coded X, XX, XXX... rather than 1, 2, 3..., and then run the resource reporting multiple times with different filters etc. to append data into the final MM table in the sequence I need for reporting. The method works but is time-consuming and prone to error.
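Outside Micromine, this sort-order problem is exactly what an ordered pandas Categorical solves - no X, XX, XXX helper fields needed. A minimal sketch, with field names as assumptions:

```python
import pandas as pd

order = ["Measured", "Indicated", "Total M&I", "Inferred", "Total MI&I"]
rpt = pd.read_csv("resource_report.csv")

# the ordered Categorical sorts by the declared sequence, not alphabetically
rpt["Category"] = pd.Categorical(rpt["Category"], categories=order,
                                 ordered=True)
rpt = rpt.sort_values(["Domain", "Category", "Grade"],
                      ascending=[True, True, False])
print(rpt)
```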


Re the comment about the deeply embedded formsets involved in estimations (filters, searches, variograms) and making it so they are all gathered into a scrollable parameter file that can then be edited and/or executed - oh, and exported and audited: I can see that being very useful, possibly via a view into the formset db, which would allow the same formset to be used in multiple places as it is now but would also allow an editable, reportable view across the different formsets.


Fully agree with the comment about box plots; they are cumbersome to use. You should be able to do a plot of a field which is automatically divided into categories based on the contents of one or more (say max 3) fields.


Re the g/t curve: the general charts are way better than they were, and I find they are in some ways easier to manage than Excel charts. It's easy to do things like report tonnes in millions (use an expression) and grade in g/t etc. What is not easy is highlighting a point on the lines displayed (the chosen cutoff) and controlling the tick marks on the axis.


Hi Keith - not perfect, but a bit of a shortcut for reporting the parameters used in estimation is available if you use variogram control files. You can right-click to edit one, then select the data and paste it into Excel (!) to create a table to include in a report.


Steve

Some of the issues that Keith mentioned are on the money. I believe the reporting is getting a big makeover for the next major release, but the user SHOULD have explicit control over the order of the reporting fields, with no modifying or extra steps like that (e.g. not the automatic alphabetical sorting of Indicated, then Inferred, then Measured as it currently is; rather, it should be Measured-Indicated-Inferred).


There should also be a simple way to report sub-totals of reporting categories when using more than one reporting category. For example, if reporting on 'classification' and 'mining_block' fields in your block model, you should be able to get the totals for classification by mining block and then a sub-total for each classification type (Measured, Indicated, Inferred). Currently you would have to run a separate report on classification only (dropping 'mining_block' from the report) to get the totals by classification type. Alternatively you could export to Excel and perform a weighted average calculation, as in the sketch below.
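As a sketch of that alternative, the sub-totals reduce to two groupbys with a tonnage-weighted grade (field names Class, Mining_Block, AU and Tonnes are assumptions):

```python
import pandas as pd

bm = pd.read_csv("block_model.csv")
bm["GxT"] = bm["AU"] * bm["Tonnes"]              # grade x tonnes

detail = bm.groupby(["Class", "Mining_Block"])[["Tonnes", "GxT"]].sum()
subtot = bm.groupby("Class")[["Tonnes", "GxT"]].sum()

for tbl in (detail, subtot):
    tbl["Grade"] = tbl["GxT"] / tbl["Tonnes"]    # weighted average grade

print(detail[["Tonnes", "Grade"]])   # classification by mining block
print(subtot[["Tonnes", "Grade"]])   # sub-total per classification
```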


Thanks to Steve Rose for directing me to this topic on Micromine's forum. I have finally found some spare time to share my know-how for generating swath plots quickly. In our company we write MRE reports on a regular basis, and as part of block model validation we generate swath plots to check how block model grades correlate with the composite grades. We used to use a Micromine macro to extract the necessary data, which was dumped into Excel to generate the swath plot graphs. However, setting up a macro for each project was a time-consuming exercise and lacked flexibility. As you may know, Python scripting was introduced in the Micromine 2013 version, and this opened up a lot of opportunities to add extra functionality to Micromine projects. So, I decided to invest some time and automate the generation of swath plots as much as possible.

I have written a Python script that pretty much wraps around the Micromine macros to extract the necessary data, then passes this data to Excel and generates swath plots for each direction (northing, easting, RL). I have created a Micromine project that contains this script and where I save the block model and composite files that I would like to analyse. It takes me a couple of minutes to prefill a parameter spreadsheet, then run the Python script, which generates an Excel file with the swath plots. I can easily change the parameters and re-run the script. The actual time the script takes to run depends on how big your block model is and what step increments you would like to interrogate the data in, but in most cases it runs for several minutes. The script generates swath plots for the global data by default, but it has a switch that additionally generates swath plots for individual lodes, domains, etc. Sometimes the global comparison doesn't provide a valid comparison; in contrast, the comparison by lodes or weathering profiles or classification domains provides a better understanding and more reasonable comparisons, which is why I embedded this logic into the script. Of course, it takes more time to run the script for multiple domains, but the good thing is you don't have to do anything - just let the computer run the script while you do some other tasks.
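As a rough illustration of the Excel-generation half of such a script - not necessarily the library or layout the author used - here is a minimal xlsxwriter sketch that writes binned swath data and adds a line chart (the data shape is an assumption):

```python
import xlsxwriter

# swath rows: (bin centre, composite mean, block mean) - hypothetical data
data = [(100, 1.2, 1.1), (140, 1.5, 1.4), (180, 0.9, 1.0)]

wb = xlsxwriter.Workbook("swath_plots.xlsx")
ws = wb.add_worksheet("Northing")
ws.write_row(0, 0, ["Centre", "Composites", "Blocks"])
for r, row in enumerate(data, start=1):
    ws.write_row(r, 0, row)

chart = wb.add_chart({"type": "line"})
for col, name in ((1, "Composites"), (2, "Blocks")):
    chart.add_series({
        "name": name,
        "categories": ["Northing", 1, 0, len(data), 0],
        "values": ["Northing", 1, col, len(data), col],
    })
chart.set_title({"name": "Northing swath"})
ws.insert_chart("E2", chart)
wb.close()
```

A full script would repeat this per direction (northing, easting, RL) and per domain, which is essentially what the attached project automates.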

I am attaching the Micromine project with the Python script. You should unzip this project and attach it to your Micromine. However, you MUST install Python and the required Python libraries before you can use the script. I have attached a Word document too that explains what needs to be done to install Python, and how and which libraries should be installed.

Some notes.

  • I have used and tested the script in both the Micromine 2016 and 2018 64-bit versions and it works as expected. I haven't tried other versions of MM, but it should work in other versions too if you install the appropriate Python libraries.
  • It is written primarily for the English version of Micromine and to some degree for the Chinese version. However, if you would like to use it under other languages, some tweaks should be applied to the code (just some extra lines of coding).
  • Currently the script is customised for a specific format of swath plots (colours, labelling, legends) that we like to see in our plots. You may change the look and format of the swath plots the way you want, either in the Excel file directly or, if you generate swath plots on a regular basis, I would strongly recommend setting the formatting in the Python code.
  • The parameter Excel file that needs to be prefilled is in the Micromine project under the name "Swath Parameters.xlsx". Please make sure that when you specify the field names of the block model and composite files in the parameter spreadsheet, they are spelled exactly the same way and have the same case. Otherwise the script will not work as expected. I have these warnings written all over, because it's the number one reason for errors.
  • I would recommend playing around with the sample block model and composite files in the project to get a feel for how the script works, and after that using your own block model and composite files.

I am glad to hear that future versions of Micromine will have built-in functionality for swath plots. It will be very useful. However, I believe this script will still have its place, because Excel provides a lot more functionality to make your graphs presentable, while Micromine's graph-formatting functionality is limited.

I hope you will find this script useful. Any suggestions and questions are welcome, and I will do my best to reply to them.



Attached files:


2018-10-05_Swath Plot Template.7z - Micromine project with the Python script and Excel parameter spreadsheet

Installing Python for Scripting in Micromi....docx - Instructions on how to install Python for Micromine scripting


Is anyone using the swath feature in 2020.5? I'm a bit confused why it's so slow and seemingly resource-heavy. Is anyone else experiencing this? These seem like simple calculations - instantaneous using Excel pivots.

Also, the legend output is so literal it's not legible. I don't understand why there is no "Label" field like the multivariate histogram has.

Building on the above point, I'm confused why the graphing functionality throughout MM seems patched together. Consistency is usually MM's strength, but for graphing it's not that consistent. Here are a few examples:

Swaths: multiple files allowed, filters allowed, but no label?

Box plots: multiple files allowed, filters allowed, and labels yes.

Histogram Multivariate: no multiple files, no filters per field allowed, and labels yes.

Why would all of these not have the same functionality?



Hi Geoff.


If you use the swath plot without weighting it will be faster. I faced the same situation you are facing now. The swath plot in the 2020.5 version is way better than in previous versions, where we had to use macros and perform many operations. What I've realised in the majority of situations I've seen is that weighting the block model to generate a swath plot makes very little difference compared to not weighting.

To get a better graphic result I generate a report file.

This is my output 

So I clean the report and manipulate it for use in the multipurpose chart function.

Then I go to Display -> Multipurpose chart.

Here you can display the way you like it.

This is my output.


See if it helps.

Cheers,

Pedro

I have been using 2020.0 for swath plots (non-weighted) and they have been generating quickly. Most of the time my swath plots have 2-4 input files. I have not moved over to 2020.5 to test this, but will in the near future.

Pedro, you have a nice workflow, so thanks for sharing. A follow-up question on your workflow: when you said you 'cleaned' and 'manipulated' the output report file, could you provide a bit more detail on how you were able to get the two estimation fields (_Krig, _NN) and the 'Start (swath space)' as you show in the screenshot? Thanks in advance.

Pedro, thanks for the tips.
I removed the weighting for the blocks and the speed improved instantly. I left length weights on for samples and it is still fast, so there must be an issue with the block volume weighting calculation.

Hi Geoff,

The reason the swath plot is slower with volume weighting is that it does rigorous volume-factor calculations for each block to ensure the correct weights are applied.


Volume weighting is really only needed when the block model is not regular (has been sub-blocked) or the swath corridors are not aligned with the block model orientation and parent block grid (i.e. the swath grid does not align with the BM grid).

So a regular BM with aligned swath corridors is safe to do with just block centres and no volume weighting - it is fast and equivalent to what you can achieve in Excel (if you have fewer than 1 million blocks).

Note that we also correctly handle rotated block models (the same rules above apply).
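A tiny numeric illustration of that rule (values made up): with equal block volumes the volume-weighted mean equals the simple centroid mean, so skipping the weighting is safe; sub-blocked volumes break that equivalence.

```python
import numpy as np

grades = np.array([1.0, 2.0, 4.0])

equal = np.array([1000.0, 1000.0, 1000.0])    # regular parent blocks
sub = np.array([1000.0, 250.0, 125.0])        # sub-blocked volumes

print(grades.mean())                          # 2.333...
print(np.average(grades, weights=equal))      # 2.333... (identical)
print(np.average(grades, weights=sub))        # 1.455 (differs)
```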

Hi Aaron.

Below is a Dropbox link to a video showing how I edit the files and plot the swath in the multipurpose chart.

https://www.dropbox.com/s/o87acjlkdcjfwab/Swath%20Plot%20Multi-Purpose%20Chart.mp4?dl=0

Regards