
Multi-core CPU processing of large datasets

daniel_logan 5 years ago in General updated by Scott P (Moderator (EN)) 5 years ago 7

I'm using MM 2014 with an i7 processor. When I load large datasets, only one core is doing all the work. Is there a way (or a version of MM) that can use the full capabilities of these CPUs?

Hi Daniel,

Can you elaborate a bit more on your scenario? What exactly do you mean by "load" - do you mean when you load data into Vizex?

What data types are in your large dataset (wireframes, block models, drillholes, images), and how big are they?


Do you do any processing on this data, or is it just loaded for display? If it's just for display, what display options do you use? (Perhaps a screenshot would help me understand.)

What are the specs of your computer with regard to RAM and graphics, and is your data stored locally or on a network drive?


I'm opening a 70 GB block model and it takes an hour to reload into Vizex every time I change the filters or hatch patterns. In Win10 64-bit, I can open the Resource Monitor and the CPU window shows all 12 threads, but 11 are inactive and 1 is running at nearly 100%. The same happens when I run any of the modelling tools. Is there a way to use more resources from the CPU? It's an i7 8700K with 24 GB RAM.
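For anyone who wants to confirm the same pattern outside Resource Monitor, here's a minimal, generic Python sketch - it uses the third-party psutil package and has nothing to do with Micromine itself:

```python
# Generic per-core monitor (assumes psutil is installed: pip install psutil).
# Run it in a second window while the long job is running; a single busy core
# stands out against the idle ones.
import time
import psutil

def log_core_usage(duration_s=300, interval_s=5):
    """Print per-core utilisation every few seconds."""
    end = time.time() + duration_s
    while time.time() < end:
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        busy = sum(1 for p in per_core if p > 80)
        print(f"{busy}/{len(per_core)} logical cores busy: {per_core}")

if __name__ == "__main__":
    log_core_usage()
```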


That sounds like a very large block model - do you have an indication of the number of records in the file? 70 GB of data is a lot to read from the disk to process, so the disk would be the first bottleneck. Such a large number of blocks will also stress the graphics card.


I see from the screenshot that it is generating the 3D index. This should be a once-off operation and will take some time to process. It will need to be remade if there are any structural changes to the block model (i.e. more subblocking etc.). The 3D index will improve all subsequent processing and loading times. Currently this process is not enabled for multithreading.
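To illustrate why the index pays for itself on every subsequent load (this is only a generic grid-index sketch in Python - it is not Micromine's actual index format or file layout): building it means one pass over all blocks, but after that a spatial filter only has to touch the candidate cells instead of scanning every record.

```python
# Minimal grid-index illustration: slow to build once, fast to query afterwards.
from collections import defaultdict

def build_grid_index(blocks, cell_size):
    """blocks: iterable of (block_id, x, y, z). Returns {cell_key: [block ids]}."""
    index = defaultdict(list)
    for bid, x, y, z in blocks:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        index[key].append(bid)
    return index

def query_box(index, cell_size, lo, hi):
    """Return candidate block ids whose grid cells overlap the box lo..hi."""
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    hits = []
    for i in range(int(x0 // cell_size), int(x1 // cell_size) + 1):
        for j in range(int(y0 // cell_size), int(y1 // cell_size) + 1):
            for k in range(int(z0 // cell_size), int(z1 // cell_size) + 1):
                hits.extend(index.get((i, j, k), []))
    return hits
```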


MM2016 and MM2018 do include multithreaded modelling tools for IDW and Kriging. These newer versions also have more efficient rendering for modern graphics cards.
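As a rough illustration of why estimation tools parallelise well (generic Python using multiprocessing, not the Micromine implementation): each block estimate is independent of every other block, so the blocks can simply be split across worker processes.

```python
# Hypothetical IDW example; sample data and block grid are made up.
from multiprocessing import Pool

# (x, y, z, grade) samples
SAMPLES = [(0, 0, 0, 1.2), (10, 0, 0, 0.8), (0, 10, 0, 1.5), (5, 5, 5, 1.0)]

def idw_estimate(block_centroid, power=2.0):
    """Inverse-distance-weighted grade for a single block centroid."""
    bx, by, bz = block_centroid
    num = den = 0.0
    for x, y, z, grade in SAMPLES:
        d2 = (x - bx) ** 2 + (y - by) ** 2 + (z - bz) ** 2
        if d2 == 0.0:
            return grade              # block sits exactly on a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * grade
        den += w
    return num / den

if __name__ == "__main__":
    blocks = [(i * 2.5, j * 2.5, 0.0) for i in range(20) for j in range(20)]
    with Pool() as pool:              # one worker per CPU core by default
        grades = pool.map(idw_estimate, blocks)
    print(len(grades), "blocks estimated")
```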


If you have any more queries, contact support@micromine.com - the support team may have other suggestions on optimising the block model size to improve loading and processing times.

Thanks Scott,

I'm using MM 2014, a 1050 Ti graphics card and an SSD C:\ drive, if anyone else reads this and wants to know my hardware config. I'm tempted to upgrade to a 1080 Ti as they have a lot more (and faster) onboard RAM (11 GB), but it might be overkill.

I am also presently working at times with an 80 GB and a 60 GB model, and it's good to hear another user chime in. They are not my models, but clients give me this stuff at times to use or evaluate. Micromine is very slow to load AND re-load these large models, and it is correct - only a fraction of the computer's memory and CPU is utilised. Based on my experience, models >10-15 GB bog Micromine down more than Brand Y. The indexing generates enormous files that are stuffed in another folder; if you don't watch it they will fill up your hard drive, because you have to delete them manually. And yes, every little change to the display requires a long wait despite indexing. I have reported this as an issue via the support mailbox, along with similar ones for Wireframe Assign on big models and wireframes, and the Modelling menu reporting using block factors.

My advice for now is to open up another instance of Micromine so you can keep working on something else while the model is loading - at least until the software can make better use of your computer's capability. Getting a faster card won't help much, I'd wager. I have an older laptop and a newer one with much more RAM and graphics memory, and the difference in performance is not significant.


A workaround in the meantime might be to slim the model down, if it's one of yours and you have that option. You can use Micromine to "optimize" a model - basically compositing unnecessary sub-blocks. It's not always an option.
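Conceptually that optimisation is a composite/merge pass over the sub-blocks. The sketch below is only an illustration of the idea in generic Python (the field names are made up and it is not the Micromine tool): sub-blocks that carry the same attribute value within a parent are collapsed back into a single record, which cuts the record count.

```python
# Illustrative sub-block compositing; 'parent_id', 'volume' and 'domain' are
# hypothetical field names, not a real Micromine file structure.
from collections import defaultdict

def composite_subblocks(subblocks):
    """subblocks: list of dicts with 'parent_id', 'volume', 'domain'.
    Returns a smaller list where uniform parents become a single record."""
    by_parent = defaultdict(list)
    for sb in subblocks:
        by_parent[sb["parent_id"]].append(sb)

    out = []
    for parent_id, group in by_parent.items():
        domains = {sb["domain"] for sb in group}
        if len(domains) == 1 and len(group) > 1:
            # all sub-blocks agree, so merge them into one parent record
            out.append({"parent_id": parent_id,
                        "volume": sum(sb["volume"] for sb in group),
                        "domain": domains.pop()})
        else:
            out.extend(group)
    return out
```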

Hi Donald, I'm in the same boat as you - these models get handed to me, and while they're not very practical, it's nonetheless a common occurrence. Opening up more MM instances is a good idea; I tested it and the performance doesn't seem affected that much. Perhaps a later version of MM will be fully "multi-threaded".

Daniel, your 1050 Ti should work fairly well (I assume once the data is loaded it is fine). The real problem is with the block model index file. This file is needed to allow faster loading and processing of the model in a variety of operations, but the index can also be invalidated by operations which change the structure of the data (i.e. adding more sub-blocks), and invalidation means the index has to be rebuilt. In MM2016 and MM2018 we have made real improvements to the performance of generating the index file, and have minimised the scenarios where the index file needs to be invalidated and rebuilt from scratch (MM2014 did this too often). So I think upgrading from MM2014 would be an improvement.
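For the curious, the general idea behind avoiding unnecessary rebuilds can be sketched like this (illustrative Python only - the structural fields and cache file are assumptions, not what Micromine actually stores): key the cached index on a hash of the structural fields, so attribute-only edits don't force a rebuild.

```python
# Illustrative index-cache pattern; field names and cache layout are assumed.
import hashlib
import json
import os
import pickle

def structure_fingerprint(blocks):
    """Hash only structural fields (position and size), not grades or flags."""
    struct = [(b["x"], b["y"], b["z"], b["dx"], b["dy"], b["dz"]) for b in blocks]
    return hashlib.sha256(json.dumps(struct).encode()).hexdigest()

def load_or_build_index(blocks, cache_path, build_fn):
    """Reuse the cached index unless the model structure has changed."""
    fp = structure_fingerprint(blocks)
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            cached = pickle.load(f)
        if cached["fingerprint"] == fp:
            return cached["index"]        # structure unchanged: reuse
    index = build_fn(blocks)              # slow, once-off rebuild
    with open(cache_path, "wb") as f:
        pickle.dump({"fingerprint": fp, "index": index}, f)
    return index
```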


We are also investing more time into improving the index building (we are planning to investigate the feasibility of multi-threading the index build). This also spills over into improving some of the BM processing options (like Wireframe Assign and BM Report) to use a new algorithm that makes better use of the block model index to improve data processing performance. The first iteration of this new algorithm is already in MM2018, in the Wireframe Grade Tonnage report function.


I would definitely recommend that you contact support@micromine.com - the support team can help review the block model and may have suggestions to optimise it. This can also help the development team understand this scenario and ensure it is considered in optimisation work.