
Importing Very Large (waste of space) Block Models

Geoff Elson · 3 years ago · in Resource Estimation · updated by Michael Tangwari 3 years ago

Not sure if others run into this problem, but when importing block models from third parties there is a strange insistence on exporting the entire block model, corner to corner!

To name names, this seems to be common with our Vulcan and Gemcom friends. This can produce CSV files of 2-10 GB when the relevant information would fit in 100-300 MB.

MicroMine seems to hang up and often fails on these extremely large CSV imports (roughly 8 GB and above). The failures, unfortunately, often occur after several wasted minutes spent attempting to import the CSV. Is anyone else experiencing this?


I have thought about asking for models to be filtered on export, but this is not an option in most cases. It's unclear why so many estimators don't think 10 GB files waste everybody's time, but here we are.

As a bit of a workaround I have been pre-filtering with Python (ImportLargeBlockModels), then importing the filtered CSV as usual.
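For anyone who wants to try the same approach, here is a minimal sketch of that pre-filtering idea. The actual ImportLargeBlockModels script isn't reproduced here, and the field names and null sentinels below are assumptions to adjust for your own model:

```python
import csv

# Assumed background/null markers and attribute columns -- these are
# illustrative, not taken from the ImportLargeBlockModels script itself.
NULL_VALUES = {"", "0", "0.0", "-99", "-999"}
FILTER_FIELDS = ["AU", "CU"]  # hypothetical grade fields to test

def prefilter_block_model(src_path, dst_path):
    """Stream a huge block-model CSV row by row, writing out only rows
    where at least one filter field holds real (non-null, non-zero) data.
    Memory use stays constant no matter how big the input file is."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if any(row[field] not in NULL_VALUES for field in FILTER_FIELDS):
                writer.writerow(row)

prefilter_block_model("full_model.csv", "filtered_model.csv")
```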


Would anyone else like or use a feature that filters a CSV on import based on null or zero attributes?

Definitely would be useful; dealing with big models can be problematic (I had a 14 GB model yesterday). There is the 'remove empty records' option for Surpac binary and UBC models, but I have never found it that useful, as there is generally a default value of some sort that means the whole thing comes in anyway. Whilst importing the whole model may be unavoidable at times (for dealing with GeoMet and waste models, etc.), a way of filtering out the background null data would be nice.

Agreed, I've had a few of these recently too: usually the size of an English county and extending to the Moho, with small parent blocks (cos more accurate, right?) sub-blocked to 0.1 m. Adding a CSV option to the BM import would be useful, with the option to filter given fields by expression.
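As a rough sketch of what that expression filter could look like outside MicroMine (the fields, expression, and chunk size here are assumptions), a chunked pandas read keeps memory bounded even on 10 GB files:

```python
import pandas as pd

EXPRESSION = "AU > 0 or CU > 0"  # hypothetical filter expression
CHUNK_ROWS = 1_000_000           # rows per chunk; bounds memory use

# Read the model in chunks, keep only rows matching the expression,
# and append the survivors to the output CSV.
with open("filtered_model.csv", "w", newline="") as out:
    wrote_header = False
    for chunk in pd.read_csv("full_model.csv", chunksize=CHUNK_ROWS):
        kept = chunk.query(EXPRESSION)
        kept.to_csv(out, index=False, header=not wrote_header)
        wrote_header = True
```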

Hi guys

Thanks for the feedback. I spoke to the developers about this issue - we are keen to address the problem.

Adding a filter option to the import tool may not have the desired effect, as the unwanted records would still need to be parsed either way. However, optimizing the parsing mechanism should lead to a significant improvement in processing time.
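To illustrate that point (a rough Python sketch, not MicroMine's actual importer): even a bare pass that parses every record and keeps nothing still pays the full parsing cost, which is why the gains come from making the parse itself faster rather than from skipping records afterwards:

```python
import csv
import time

def time_bare_parse(path):
    """Parse every record of a CSV and keep nothing. This is the
    floor cost any import pays, whether or not rows are filtered."""
    start = time.perf_counter()
    with open(path, newline="") as f:
        row_count = sum(1 for _ in csv.reader(f))
    return row_count, time.perf_counter() - start

rows, seconds = time_bare_parse("full_model.csv")
print(f"parsed {rows:,} rows in {seconds:.1f} s")
```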

We would be very grateful if one of you was willing to go through the hassle of sharing a large BM file with us for testing.

Regards

I have a 1.25 GB file I can share with you, Andrew - probably a fairly standard affair from Surpac (out of Vulcan) - a full-volume model with background data in most fields.

Thanks Ron

I'll get in touch with you via email to make arrangements. 

How do I use the Python script? I have used one before. I also have a large 9 GB block model that is taking forever to open.