Two new charts have been added to the General Analysis application for assessing hazard with the frequency-magnitude relationship. The new charts plot various hazard parameters over time or by time of day:
Charts / Time Series / Hazard over Time
Charts / Diurnal / Diurnal Hazard
The following parameters can be plotted in each chart (maximum two at a time):
Mmin – The magnitude of completeness; the magnitude above which the dataset is complete.
b-value – The slope of the Gutenberg-Richter distribution, which describes how the frequency of events scales with magnitude.
N at Mref – The number of events (N) above the reference magnitude (Mref, user defined). Note that for reference magnitudes less than Mmin, N will not reflect the actual number of events in the database, since it is based on the Gutenberg-Richter distribution, assuming a complete dataset.
Hazard Probability – The probability of an event exceeding the design magnitude (user defined) within one year.
Hazard Magnitude – The magnitude that, to a certain reliability (user defined), won’t be exceeded within one year. Hazard Magnitude is essentially the inverse of Hazard Probability.
Each chart is generated by breaking the data into bins and fitting the Gutenberg-Richter distribution to each bin. The bin width can be set in the control panel. Since there can be a lot of variability in the data and fitting procedures, there are also controls to smooth the results with a user-defined bandwidth.
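As a rough sketch of what each bin's fit involves (the function and parameter names here are illustrative, not mXrap's internals), the b-value can be estimated with the maximum-likelihood method and N at Mref read off the fitted relation:

```python
import math

def fit_gutenberg_richter(mags, m_min, bin_width=0.1):
    """Aki/Utsu maximum-likelihood b-value for events at or above m_min."""
    complete = [m for m in mags if m >= m_min]
    n = len(complete)
    mean_m = sum(complete) / n
    # bin_width/2 corrects for magnitude binning (Utsu correction)
    b = math.log10(math.e) / (mean_m - (m_min - bin_width / 2))
    # a-value such that log10(N >= m) = a - b*m
    a = math.log10(n) + b * m_min
    return a, b

def n_at_mref(a, b, m_ref):
    """Expected number of events above m_ref from the fitted G-R relation.
    For m_ref < Mmin this extrapolates beyond the observed catalogue."""
    return 10 ** (a - b * m_ref)
```

For reference magnitudes below the magnitude of completeness, `n_at_mref` returns the extrapolated count from the fitted distribution rather than the raw database count, which is the behaviour noted above.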
The figure below is an example of the Diurnal Hazard chart, showing how the b-value varies with the time of day. The b-value drops from around 1.3 to 0.7 during shift change. This represents a large difference in hazard, which is highly sensitive to the b-value (as illustrated in a previous post).
Note that the hazard calculations assume a constant b-value within the analysis volume. This can result in an underestimated hazard (explained in the Hazard Iso’s blog post). For more accurate results, use the hazard assessment application, where the volume is discretised and the probabilities are integrated together from the small scale, to the larger scale.
If you would like to arrange a root upgrade to get these charts, let us know at firstname.lastname@example.org.
New background filters have been added to the Hazard Assessment application. The time-of-day filter can be used to see the effect on the hazard results of removing events during blasting/shift change. You can view the results in either raw or normalised form. The hazard calculations normalise the event rate in any case, to represent hazard in yearly terms. If your analysis period is 6 months, the number of events is doubled to represent a year’s worth of events. When applying the time-of-day filter though, the actual analysis period is less than 6 months, because several hours per day have been removed. Without normalisation, the hazard will always drop when applying the time-of-day filter, because you are removing events and nothing else changes (i.e. still using 6 months). With normalisation turned on, the removed time period is accounted for in the hazard calculations, and the results accurately represent the hazard state during the relevant times of day.
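The normalisation arithmetic can be sketched as follows (variable names are illustrative, not mXrap's internals):

```python
def annualised_rate(n_events, period_days, hours_removed_per_day=0.0):
    """Annualise an event rate, accounting for time-of-day filtering.
    With hours_removed_per_day = 0 this is the plain raw-to-yearly scaling."""
    effective_days = period_days * (24.0 - hours_removed_per_day) / 24.0
    return n_events * 365.25 / effective_days

# Raw: 6 months of data, count doubled to represent a year
raw = annualised_rate(500, 182.625)
# Normalised: same 6 months, but with 4 h/day removed by the filter,
# so the surviving events are scaled against a shorter effective period
norm = annualised_rate(450, 182.625, hours_removed_per_day=4.0)
```

Note that even though the filter removed events (500 down to 450 in this made-up example), the normalised annual rate can end up higher than a naive raw scaling would suggest, because the effective analysis period shrinks too.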
Normalisation also applies to the short-term responses filter, where events can be removed based on a time and distance from a blast or significant event. In this case the normalisation is a bit more complicated. With the time-of-day filter, the effective analysis period is the same for the whole grid. Here, however, there is an uneven distribution of space and time removed from the analysis, so each individual cell has its own effective analysis period, based on how many triggers (and responses) are nearby. The idea is still the same though: without normalisation, the hazard will drop due to the removal of events without adjusting the analysis period. With normalisation turned on, the results will represent the hazard state outside of short-term response regions.
A new chart has been added to the Hazard Assessment app that shows the effect of different short-term response filtering on hazard. The chart works in a similar way to the Track Volumes over Time chart, computing the hazard over and over again and automatically changing variables with each run. The chart and associated control panel can be found in the Hazard Assessment / Hazard ISO’s window, under the Response Analysis menu. To generate the chart, you need to specify a maximum response time, a time delta, and response distances (up to 6). The hazard will be calculated for each response distance and for each response time from zero to the maximum (at delta intervals). The hazard recorded is the probability of exceeding the design magnitude within the chosen grid, which is the value displayed in the footer of the 3D ISO view. It can take some time to calculate, depending on how many iterations you specify. The video below shows the chart being generated for response times up to 72 hours and response distances of 50, 100 and 150 m.
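The parameter sweep itself is straightforward to sketch; here `hazard_fn` is a caller-supplied stand-in for the full hazard calculation that the app performs on the grid:

```python
def response_analysis(max_time_h, delta_h, distances_m, hazard_fn):
    """Sweep response-time/response-distance combinations and record the
    hazard for each. `hazard_fn(time_h, dist_m)` stands in for the full
    hazard calculation; this sketch only shows the iteration structure."""
    results = {}
    for dist in distances_m:
        # Response times from zero to the maximum, at delta intervals
        times = []
        t = 0.0
        while t <= max_time_h + 1e-9:
            times.append(t)
            t += delta_h
        results[dist] = [(t, hazard_fn(t, dist)) for t in times]
    return results
```

For the example in the video (72 h maximum, distances of 50, 100 and 150 m), a 12 h delta would mean 7 hazard calculations per distance, 21 in total, which is why the chart can take a while to generate.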
As mentioned in the last blog post, a stochastic declustering algorithm has been implemented in mXrap to separate events into ‘clustered’ and ‘background’ components. It can be useful when designing seismic exclusions and re-entry procedures to separate seismicity that occurs in short bursts from seismicity that has low variability in space and time. Short-term exclusions cannot be used to manage the risk associated with background seismicity, since the hazard inside a potential exclusion would be the same as outside the exclusion. Efficient exclusion and re-entry procedures target areas where seismicity is most clustered and where the seismic hazard to which people are exposed can be reduced with a short disruption to production.
The filter controls for stochastic declustering in General Analysis are in ‘Event Filters / Background Activity’ and a new chart has been added to show the cumulative events of the two components in ‘Charts / Time Series / Declustered CNE’. An example of the cumulative declustered events chart is shown below for a week’s worth of events at the Tasmania mine. In this case approximately 32 % of events have been classified as ‘background’.
The declustering is based on the distribution of inter-event times (time between successive events). The distribution (PDF) of inter-event times has been shown to follow the gamma distribution (Corral 2004). The chart below shows how the events in the example above (black crosses) closely follow the gamma distribution (red line). Hainzl et al. (2006) showed how to estimate the rate of background events from the gamma distribution, based on the mean (µ) and standard deviation (σ).
Background Proportion = µ² / σ²
Background seismicity is generally assumed to be stationary and Poissonian. In other words, the average time between events is constant and known, but the exact timing between events is random. Each event is assumed to be independent and not affect the occurrence of other events. The inter-event time of a Poisson process follows the exponential distribution (green line).
The event distribution clearly deviates from the background distribution for small inter-event times. This deviation is caused by the clustered component of seismicity. The distribution of small inter-event times corresponds to the inverse distribution (yellow line), which is explained by sequences that follow the Modified Omori Law (MOL). In this case the slope of the distribution corresponds to the MOL with decay parameter p ≈ 0.8.
The declustering method was described by van Stiphout et al. (2012). The probability that an event is part of the background (purple line) is calculated based on the inter-event time and the ratio between the background and gamma PDF’s. Events with small inter-event times are more likely to be clustered events. Events with large inter-event times are more likely to be background events.
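A minimal sketch of the whole procedure, assuming a method-of-moments gamma fit and an exponential background PDF (this follows the spirit of Hainzl et al. 2006 and van Stiphout et al. 2012, not mXrap's exact implementation):

```python
import math
import random

def background_probabilities(times):
    """Estimate the background proportion and per-event background
    probabilities from inter-event times (a declustering sketch)."""
    dts = [t2 - t1 for t1, t2 in zip(times, times[1:]) if t2 > t1]
    n = len(dts)
    mean = sum(dts) / n
    var = sum((dt - mean) ** 2 for dt in dts) / n
    p_bg = mean ** 2 / var           # background proportion = mu^2 / sigma^2
    shape = mean ** 2 / var          # gamma shape (method of moments)
    scale = var / mean               # gamma scale
    lam = p_bg / mean                # background (Poisson) event rate
    probs = []
    for dt in dts:
        gamma_pdf = (dt ** (shape - 1) * math.exp(-dt / scale)
                     / (math.gamma(shape) * scale ** shape))
        expo_pdf = lam * math.exp(-lam * dt)   # exponential background PDF
        # Probability the event is background: weighted PDF ratio, capped at 1
        probs.append(min(1.0, p_bg * expo_pdf / gamma_pdf))
    return p_bg, probs

def decluster(times, seed=0):
    """Randomly label each inter-event time 'background'/'clustered'."""
    rng = random.Random(seed)
    p_bg, probs = background_probabilities(times)
    return ['background' if rng.random() < p else 'clustered' for p in probs]
```

Because the labels are drawn randomly from the probabilities, repeated runs give different labels for individual events while preserving the overall proportions, which is the stochastic behaviour described below.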
It is important to note the random component in the declustering process. A specific event may be classed as ‘clustered’ on one run of the declustering and ‘background’ on the next, although the overall proportions will remain the same (hence the ‘stochastic’ in stochastic declustering). There is also no consideration given to the spatial clustering of events; all events are assessed together in the time domain. Nor is any consideration given to the magnitude of events.
The rate of background events is assumed to be constant, although in reality the background rate will slowly vary over time, related to changes in system sensitivity, general rates of extraction and different mining locations. To account for long-term fluctuations in background rate, events are broken into groups, and the background proportion is computed separately for each group. Groups of events are kept as small as possible, subject to a user-defined minimum number of events and minimum time period. The background rate is constant within each group.
Aside from General Analysis, the stochastic declustering process has been added to the Hazard Assessment, Short-term Response Analysis, and Seismic Monitoring apps. The background filters in the hazard app can be used to compare the seismic hazard of clustered and background seismicity (as per below). Background rates are also calculated for triggers in the short-term responses app and for the reference rate in the activity rate monitoring window.
For those wishing to read more about the declustering process, the CORSSA article by van Stiphout et al. (2012) is a good summary of many different approaches used in earthquake seismology, including the method described here.
We have added some new event filter options to General Analysis related to ‘background’ activity. ‘Background’ events are generally defined as having low variability in space and time. The new background filters aim to identify events that are clustered in space and time and the user can either display the ‘clustered’ or the ‘background’ component of seismicity.
There are three ways of classifying clustered events; by time-of-day, by proximity to blasts and significant events, and by a stochastic declustering procedure. Stochastic declustering will be explained in a separate blog post.
With the time-of-day filter, you can specify up to five periods of the day to define increased activity around shift change/blasting times. Times are entered in hours, e.g. 5:30pm = 17.5. Events within these periods are not shown by default, but you can toggle/invert the time-of-day filter to only show events inside the time-of-day periods (and hide events outside).
With the short-term responses filter, you can define a time period and spherical radius around blasts and significant events to filter out events. Use the normal blast filter to control which blasts are considered. Significant events are considered if they are within the current base filter, and above the specified magnitude. Note that the significant event itself is not filtered out (it is treated as a background event, not a clustered event). Just like the time-of-day filter, you can toggle/invert the filter to only show the responses, and hide events outside the response windows.
The last filter option is an automatic classification system for separating background and clustered events. You can toggle between each component of seismicity defined from stochastic declustering. Watch out for the next blog post if you are interested in the details of how this method works.
This is a new addition to the General Analysis application. Find the panel under ‘Event Filters / Background Activity’. If you don’t see this option, you need a root upgrade. Get in touch with support to arrange an upgrade by emailing email@example.com.
CheX is mXrap’s new system for setup and support. It is a checklist-based website which helps users to follow step-by-step instructions to complete tasks such as setting up a root folder, applying a patch and doing a default backup. The login details for CheX are the same as those for mXsync (if you’re unsure what your details are, send us an email at firstname.lastname@example.org). Keep an eye out for CheX, as it could come in handy for you in the future!
The new website has an improved interface and revamped training programme, along with some new videos (including event density isosurfaces and system design). The new login system allows us to show you which videos you’ve already watched.
To access the new site, go to www.mxrap.com/mxvideos and log in with your Google account, Microsoft account or with any other email address. If you have any questions about the new videos website or anything else related to mXrap, email email@example.com.
To create your videos, simply open the ‘Video Export’ panel. You can then choose whether to take a video of just the General Analysis 3D view, or of any other tool in General Analysis. To make a video of the General Analysis 3D view, you just need to choose where to save the video, give it a name and set the frames per second (FPS). Then you choose the time slice settings (start, slice width and number of slices). You can also make these slices overlap to create a smoother video. Once these have been set, simply press the ‘Export Video’ button and your video will be created.
The generic video creation works in a similar manner; however, for each tool you want to capture, you will need to go to the eXport tab on the right-hand side, choose the location for the video, set the FPS and press start. Then set the time slicing variables and press the Export Video button, and a video will be created for each of the tools you pressed start for.
Moment tensor analysis is a topic that carries a decent level of uncertainty and confusion for many people. So I’m going to lay it out as simply as I can. For this post, I’m not going to go into too many details on how moment tensors are actually calculated. But, I’m going to summarise the things I think are most important for geotechnical engineers to know for interpreting moment tensor results.
OK, so, what is a moment tensor?
A moment tensor is a representation of the source of a seismic event. The stress tensor and the moment tensor are very similar ideas. Much as a stress tensor describes the state of stress at a particular point, a moment tensor describes the deformation at the source location that generates seismic waves.
You can see the similarity between the stress and moment tensors in the figure below. The moment tensor describes the deformation at the source based on generalised force couples, arranged in a 3 x 3 matrix. The matrix is symmetric, though, so there are only six independent elements (i.e. M12 = M21). The diagonal elements (e.g. M11) are called linear vector dipoles. These are equivalent to the normal stresses in a stress tensor. The off-diagonal elements are moments defined by force couples (moments and force couples were discussed in a previous blog post).
Producing a moment tensor for a seismic event requires the Green’s function. This function computes the ground displacement recorded by a seismic sensor for a known moment tensor (the forward problem). A moment tensor inversion uses the inverse Green’s function to find the source moment tensor from the sensor data.
Sure… but what’s with the beach balls?
It’s pretty hard to interpret a 3 x 3 matrix of numbers, so moment tensors are usually displayed as beach balls, either 2D or 3D. I will mostly discuss the 3D case; the 2D diagram is just a stereonet projection of the 3D beach ball.
The construction of a beach ball diagram is very simple. For each point on the surface of a sphere, the moment tensor describes the magnitude and direction of the first motion. If the direction of motion is inwards, towards the source, the surface is coloured white (red arrows). If the direction of motion is outwards, away from the source, the surface is coloured black (blue arrows). Where there is a border between black and white on the beach ball surface, the direction of motion is tangential (purple arrows). The direction of motion across the border is white-to-black.
The figure below shows the first ground motion on the beach ball surface, split into radial and tangential components. The lengths of the radial and tangential arrows are proportional to the strength of the P and S waves respectively. P-waves generally emanate strongest from the middle of the white and black regions. S-waves emanate strongest from the black-white borders.
The location of the pressure and tension axes can be confusing. If you look at the S-waves diagram, the tension axis is in the compressional quadrant. However, it does make more sense from the P-waves diagram. The black/white convention can also be counter-intuitive for some. ‘Black’ holes pull things inwards, the sun radiates ‘white’ light outwards, but the beach ball diagram is the opposite of that. I’m sorry I don’t know why this is the convention. Perhaps seismologists are Star Wars fans… Vader wants Luke to come to the dark side, and so this is the movement direction that he is tempted towards… that’s all I got 😊.
Right, but what can I learn about the event mechanism?
Even with the beach ball diagram, it can still be hard to interpret the geological or physical mechanism of the event. This is why the moment tensor is often decomposed into its constituent elementary source mechanisms. To decompose the moment tensor, the matrix is rotated to zero the off-diagonal elements. This is just like finding the principal axes of a stress tensor, by zeroing the shear elements and leaving the normal stresses. So, every moment tensor can be expressed as three linear vector dipoles (orthogonal), rotated to a particular orientation. These three dipoles are referred to as the P (pressure), B (or N, neutral or null) and T (tension) principal axes.
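Numerically, finding the principal axes is just an eigendecomposition of the symmetric 3 x 3 matrix, exactly as for principal stresses. A minimal sketch (not mXrap's implementation):

```python
import numpy as np

def principal_axes(m):
    """Decompose a symmetric moment tensor into its P, B and T axes.
    Returns each axis as (eigenvalue, unit eigenvector)."""
    m = np.asarray(m, dtype=float)
    assert np.allclose(m, m.T), "moment tensor must be symmetric"
    vals, vecs = np.linalg.eigh(m)   # eigh returns ascending eigenvalues
    # Smallest dipole -> P (pressure), middle -> B (null), largest -> T (tension)
    return {'P': (vals[0], vecs[:, 0]),
            'B': (vals[1], vecs[:, 1]),
            'T': (vals[2], vecs[:, 2])}
```

For a pure double couple such as M12 = M21 = 1 (all other elements zero), this gives dipoles of -1, 0 and +1, i.e. two equal and opposite dipoles and a zero null axis.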
In combination, the three dipoles either result in an overall expansion or a contraction of the source volume. If the source is explosive, the largest dipole direction is the T axis and the smallest dipole is the P axis. These are reversed for an implosive source. Although, for a pure isotropic source the axis orientations have no meaning.
The isotropic component is the portion of the tensor that represents a uniform volume change. Only P-waves radiate from a purely isotropic source. A positive isotropic component is an expansion/explosion. This can be a confined blast or possibly rock bulking. A negative isotropic component is a contraction/implosion. Any pillar burst, buckling or rock ejecting into a void will likely appear as an implosion; given the path of the recorded waves around the void, all first motions will be towards the source.
When the isotropic component is removed from the moment tensor, the remainder is the deviatoric component. The deviatoric tensor results in displacement with zero net volume change, i.e. equal movement in, equal movement out. The underlying geological process behind the deviatoric component is a general dislocation of a fault. The general dislocation can be a mix of shear and normal dislocation (although still with no net volume change). To better interpret the relative proportions of shear and normal displacement, the deviatoric component can be decomposed into the DC and CLVD elemental sources.
Double Couple (DC) source
The DC source is a pure shear dislocation. It is referred to as a double couple because there are two force couples and two (alternate) fault plane orientations that equally model the expected displacement. This notion was discussed in a previous post. The shear direction on the fault is from white-to-black. You can review the orientation of the two planes in relation to your site geology. It may be the case that one of the planes makes more sense than the other or you can find the specific structure.
A pure DC source has two equal and opposite linear vector dipoles. The third dipole is zero (B or null axis). The embedded video shows the direction of first motions from a pure DC source. As mentioned already, motion is inwards for the white regions, outwards for the black regions and tangential across black-white borders. Radial movement radiates P-waves, tangential movement radiates S-waves.
Compensated Linear Vector Dipole (CLVD) source
The CLVD source is a normal dislocation on a plane. The normal displacement from one linear vector dipole is ‘compensated’ (hence the name) by opposing displacement from the other two linear vector dipoles so that there is no net volume change.
For a positive CLVD source, a single tensile dipole is compensated by two compressive dipoles.
Vice-versa for a negative CLVD source.
A pure CLVD source would imply a Poisson’s ratio of 0.5, which is more like chewing gum or toothpaste than rock. So there is no geological example of a pure CLVD source. It can make sense, however, as part of a mixed-source event, i.e. part isotropic, part CLVD. This event mechanism may be dominant for confined pillar crushing events. The Hudson chart indicates two key points that are a combination of isotropic and CLVD sources. A single linear vector dipole (with the other two dipoles zero) decomposes to a source that is one-third isotropic and two-thirds CLVD. A pure tensile crack mechanism decomposes to a source that is 55% explosive and 45% positive CLVD.
The Hudson chart is a useful tool to visualise the moment tensor decomposition, seeing the relative proportions of the isotropic, DC and CLVD elemental sources. The vertical axis is the isotropic component, from -100% (implosion) to 100% (explosion). The horizontal axis is the deviatoric decomposition, from +100% to -100% CLVD, with 100% DC in the centre (0% isotropic, 0% CLVD). The outer border is the 0% DC line.
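A sketch of one common decomposition convention (after Vavryčuk 2001; other conventions give slightly different percentages):

```python
import numpy as np

def decompose(m):
    """Split a moment tensor into ISO, DC and CLVD percentages
    (one common convention; not the only one in use)."""
    m = np.asarray(m, dtype=float)
    m_iso = np.trace(m) / 3.0
    dev = m - m_iso * np.eye(3)                # deviatoric part
    d = np.linalg.eigvalsh(dev)
    d = d[np.argsort(np.abs(d))]               # sort by absolute value
    eps = -d[0] / abs(d[2]) if d[2] != 0 else 0.0
    denom = abs(m_iso) + abs(d[2])
    c_iso = m_iso / denom if denom > 0 else 0.0
    c_clvd = 2.0 * eps * (1.0 - abs(c_iso))
    c_dc = 1.0 - abs(c_iso) - abs(c_clvd)
    return 100 * c_iso, 100 * c_dc, 100 * c_clvd
```

Under this convention a tensile crack tensor such as diag(1, 1, 3) decomposes to roughly 56% ISO and 44% positive CLVD with no DC component, consistent with the tensile crack decomposition mentioned above.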
There are many factors that can lead to uncertainty in determining the first motions of waves recorded at sensors and the final moment tensor solution. Seismic waves travelling through the rock mass divert around mining voids and go through numerous refractions, reflections and superimpositions. Noise at the sensor site can also influence the first motion analysis and the solution can also be very sensitive to poor P and S picks. Good moment tensor solutions require a sensor array that is well dispersed, covering the focal sphere in all three dimensions.
Be aware that not every moment tensor solution will be of equal quality, particularly for small events recorded by few sensors. Your seismic service provider should provide you with some measure of solution accuracy to help assess this. This might be based on an assessment of the sensor configuration or a misfit analysis between the observed waveforms and the theoretical waveforms generated synthetically from the moment tensor. In general, it is better to look at trends and a convergence of evidence across multiple events rather than a single moment tensor solution. Even if you are investigating a single large event, it is probably worth reviewing the mechanisms of aftershocks and previous events in the area.
It is important not to jump blindly to the nodal plane solutions; consider the decomposition of the moment tensor in your analysis. If the source is only 5-10% DC, the nodal planes are not very significant. The P, B, T axes are also less important for strongly isotropic sources, so keep that in mind for stereonet analysis.
And one last warning about CLVD components. In tests where random noise is added to an initially noise-free moment tensor inversion of a pure DC source, the noise serves to increase the CLVD component. So when a CLVD component shows up in a solution, it is hard to be sure that it isn’t just noise related. In fact, seismologists often evaluate the accuracy of a moment tensor solution by the size of its CLVD component; a good solution has a low CLVD component. This is earthquake seismology though, where the range of rock mass mechanisms is less diverse than in the mining environment, and a DC source is often assumed for earthquakes.
Anyway, hopefully that clears up at least some of the mystery around moment tensors. Feel free to contact support with any questions. For those looking to read up further I recommend this manual by Dahm and Krüger (2014) and the references therein. They go into much more detail on alternate decompositions and the moment tensor inversion process.
The isosurfaces for b-value have been upgraded in the latest root. There is now much more control over the isosurface levels. Up to 5 iso’s can be plotted for user-defined ranges. A descriptive name can be assigned to each level and displayed in the legend. A new video has been uploaded to the Hazard Assessment page that explains the new b-value isosurfaces.
Moment tensors have been added to the General Analysis application in the recent update. Beach balls and principal axes can be viewed in the General Analysis 3D view. There is also a separate Moment Tensor window with a number of stereonets and mechanism charts. Two new training videos have been uploaded to the General Analysis page that walk through the new tools.
IMS sites should have moment tensors loaded in with the events table automatically. ESG sites can add moment tensors from CSV files in the Events Import app.
The seismic monitoring applications have been modified in the latest root update. There are now two seismic monitoring applications and a third app for setup. The two different monitoring apps are required because not all sites wanted to use exclusion procedures. The monitoring app with exclusions has a different user interface structure and a few extra features.
A new window has been added to monitor activity rate so that both monitoring apps have windows to monitor events and to monitor activity rate. The activity rate monitor is the same in both apps. The Setup app is used to initialise and modify all of these monitoring windows. New training videos for seismic monitoring have been uploaded.
Activity rate is monitored on a short-term and medium-term basis. Short and medium-term hotspots are listed in two tables based on pre-defined activity rate thresholds, and users can switch between short and medium-term monitoring modes. Each hotspot can be highlighted and there are automatic controls for level plan and longsection views. An alert popup message is triggered when a new hotspot is detected and there were no hotspots beforehand.
The following steps are required to setup the activity rate monitor.
1. Define Grid. Activity rate is assessed on a grid. Specify the areas of your mine in which you want the activity monitor to operate. Note that tighter grid spacings will slow down the calculations and reload speed.
2. Reference Rate. The reference rate is the basis for the activity monitor thresholds. The reference rate may vary by location, since the “normal” activity rate for a particular area will depend on the system sensitivity and rock mass response. Try to select at least a few months of events with consistent data quality and no major changes in mining conditions. Use the time-of-day filter to remove the effects of blasting if desired (see diurnal chart). You can also apply an extra factor to the reference activity rate, or simply use a constant reference rate across all grid locations.
3. Monitoring Parameters. You need to specify a radius around each grid point to measure the activity rate and define the time period that constitutes short-term and medium-term activity rate monitoring. You can also specify a minimum number of events to trigger an activity rate caution or alert. You probably don’t want to trigger an alert whenever any event happens.
4. Activity Thresholds. This is where you define which activity rates will trigger a caution or an alert. You can specify activity thresholds based on either a probability or a ratio. Ratio-based activity thresholds are a simple linear function of the reference rate. Probability thresholds are based on the work of Marsan (2003) on detecting seismicity rate changes between two time periods. The P-value refers to the probability that the ratio of the current activity rate to the reference activity rate is greater than the activity ratio (r). If the activity ratio is one, the P-value is simply the probability that the current rate is greater than the reference rate. The probability-based thresholds account for the different uncertainty introduced by using a very short time period (e.g. 30 mins) to assess the activity rate compared to a reference rate measured over a much longer period (e.g. 6 months). For further reading I recommend the Marsan and Wyss (2011) document linked below.
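As a rough illustration of how probability-based thresholds capture the uncertainty of short observation windows, here is a Monte Carlo sketch with Gamma posteriors for two Poisson rates (not necessarily Marsan's exact formulation):

```python
import random

def p_rate_increase(n_cur, t_cur, n_ref, t_ref, ratio=1.0,
                    n_samples=20000, seed=0):
    """Probability that the current rate exceeds `ratio` times the reference
    rate, treating both as Poisson processes with Gamma posteriors
    (Jeffreys prior). Illustrative only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Posterior for a Poisson rate: Gamma(n + 0.5, scale = 1/t)
        lam_cur = rng.gammavariate(n_cur + 0.5, 1.0 / t_cur)
        lam_ref = rng.gammavariate(n_ref + 0.5, 1.0 / t_ref)
        if lam_cur > ratio * lam_ref:
            hits += 1
    return hits / n_samples
```

A 30-minute window with a handful of events gives a very wide posterior on the current rate, so the same apparent rate ratio yields a much less confident P-value than it would over a longer window, which is the point of the probability-based thresholds.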
Once you have reviewed all the parameters and thresholds, the settings must be saved before they will be applied to the activity rate monitoring window.
Another new app has been added to the Seismic Suite for the general analysis of short-term seismic responses. There are multiple features to analyse the short-term response to different triggers (events or blasts) in time and space. In later versions, this app will replace all of the tools in the previous “Omori Analysis Tools” app but version 1 currently only replaces the old “Response to Blasting” window.
There are two main aspects of the app, the simple response viewer and trigger assessment windows. New training videos have been uploaded for the new app.
This window is for the assessment of short-term responses. Triggers must be selected (ticked) from the list and response events for the selected triggers can be assessed with a number of tools.
Responses 3D — View the trigger locations and response events within range. Adjust the spheroid controls to include/exclude events related to the trigger. “Nearby events” are events just outside the spatial or temporal range of the trigger.
Time after Trigger — Chart the events as a function of time after the trigger. Events can be plotted as a histogram or cumulatively, and the time bin used for the histogram can be adjusted. When multiple triggers have been selected, responses can be viewed individually or stacked together. The best fit Modified Omori Law (MOL) is calculated automatically; if the calculation fails, a message will appear in the control panel. You can override the MOL parameters (p, K and c) in the control panel.
Distance from Trigger — Chart the events as a function of distance from the trigger, with options for X distance, Y distance, Z distance, horizontal distance and 3D distance. You can normalise the chart by the number of events or by the volume under consideration. Normalising by number of events changes the Y axis to a percentage of events from 0-100%. Normalising by volume changes the X axis so that the volume increases linearly as the distance increases (expanding sphere). If the cumulative events curve is linear in the volume-normalised chart, this represents a constant event density.
Density 2D — View the distribution of events in the 2D plane (XY, XZ or YZ). Events are divided into spatial bins and then ranked from highest to lowest density. The grid points are coloured by cumulative events, where the accumulation is from highest to lowest density points.
Density 3D — Similar to the Density 2D plot, uses the same grid spacing controls. Isosurfaces are used to show the cumulative events distribution in 3D.
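For reference, the Modified Omori Law used in the Time after Trigger chart has the form n(t) = K / (t + c)^p. A small sketch of the rate and its closed-form integral (the expected cumulative count):

```python
import math

def mol_rate(t, K, c, p):
    """Modified Omori Law event rate n(t) = K / (t + c)^p."""
    return K / (t + c) ** p

def mol_cumulative(t, K, c, p):
    """Expected number of events between time 0 and t (closed-form
    integral of the MOL rate; the p = 1 case is logarithmic)."""
    if abs(p - 1.0) < 1e-9:
        return K * math.log((t + c) / c)
    return K * ((t + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
```

A best-fit routine like the app's would search for the (p, K, c) that best match the observed cumulative counts; the functions above only evaluate the law for given parameters.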
Trigger Assessment Window
This window is for the analysis of triggers; either blasts, events or user-defined points in space and time. If the response viewer is for assessing the area of exclusion and the time of re-entry, the trigger assessment window is for assessing which blasts should have an exclusion and which events should have an evacuation.
Triggers 3D — View triggers in 3D and use the marker style colours and scales to assess the location of triggers that typically have a response.
Trigger Summary Tables — These tables summarise triggers by various characteristics. For example, you can use these tables to assess what type of blasts have the biggest response, what magnitude events typically have aftershocks, how the responses to triggers vary by depth etc. The tables currently use the following parameters to assess potential triggers for exclusion or evacuation:
The root upgrade that coincides with the software upgrade beyond v5.9 includes a new app for creating and exporting models of mine geometry. Stope, cave and development geometry is a fundamental aspect of most geotechnical analysis. Mine geometry also varies over time and capturing these changes is critical in any back analysis or numerical modelling that investigates stability or monitoring parameters over time. This is a utility app to create models of mine geometry that can be exported to facilitate a wide range of applications such as:
Live Surveys — Mine geometry can be displayed as surfaces in General Analysis and other 3D views that automatically update with the current time filters. So when you backdate your seismicity to a year ago, the surveys will only show geometry present at the time.
Volume Calculations — Mining volume can be calculated over time. This allows seismicity and other parameters to be plotted as a function of mine geometry.
Distance Calculations — It is often important to know the distance of large events or damage locations to the nearest development or stope at the relevant time. This is like the distance to survey tool except time is also taken into account. So you can compute the closest stope at the time of damage or the closest development at the time of the large event.
Numerical Modelling Inputs — The geometry models can be exported in a number of formats, including Map3D .inp format.
The export options will gradually be expanded to more modelling formats such as FLAC, RS3 and Wave. Another possible application is to create block models of lithology or geotechnical domains based on lithology contact wireframes.
Training videos have been made to walk through the new app. See the page below.
We have started rolling out a new version of the mXrap software and root folder. Software versions 5.9 and above include a few interface changes. The new version is available on the download site but DO NOT download it until someone from the mXrap team has upgraded your root folder. Several root folder changes need to be made simultaneously with the software upgrade. We will be in touch soon to arrange the upgrade (if we haven’t already).
From now on, when you open an app, all available windows are shown at the top of the screen as tabs. The old Windows menu, where extra windows for an app could be opened separately, is now gone. The workspaces, reload data, and undo/redo/save buttons have also moved slightly.
Note there is different styling for folders. Folders can be expanded to a 2nd level with extra tabs.
Most of the apps have had a bit of a face-lift to adjust to the new window tabs. The Hazard Assessment app for example has been split into multiple windows, for easier workflow. We have added info pages to most apps to describe the app and point to relevant literature.
We also have a bit more control over panels, to show, hide and disable certain instructions and controls. We think this will further improve usability going forwards.
New and improved apps
Aside from the interface changes, the root upgrade has also got some new stuff we have been working on over the last 12 months. We will make separate blog posts on each of the new items to explain further. Training videos for the new additions are in progress.
Moment tensors in General Analysis. Moment tensors are now read in automatically with IMS event files. ESG sites can load individual data files they have received from ESG. Beach balls can be displayed in General Analysis with various decompositions and scaling. There is a separate window with stereonets and mechanism charts.
Short-term Response Analysis app. This is an upgrade to the old “Response to Blasting” window in the “Omori Analysis Tools” app. There are more spatial analysis options in this new version. There are also trigger assessment tools for large events and blasts. Previously only blasts could be chosen as a trigger.
Mine Geometry Models app. This is a new app to create 4D models of mine geometry from surveys. Models can be used in General Analysis to automatically display mine geometry and update based on the current time filters. Models can also be exported as Map3D .inp files.
Seismic Monitoring app. This app has been updated. There are now separate apps for event monitoring with and without exclusions. There is also a window to monitor activity rate.
Survey Format Converter app. New utility app to export surveys in a different format. Can also use the decimation tool to export decimated surveys. We’ve also recently made a DXF export option. DXF is an option in this app as well as the plane fitting and mine geometry models apps.
Are your surveys really slow to load? Are you having difficulty rotating your 3D view?
The problem might be the level of detail in your survey files. If the level of detail of survey files is unnecessarily high, it will slow down the 3D view for no reason. This is often the case with stope CMS files. To reduce the level of unnecessary detail in your survey files:
1. Open the Survey Setup Window (can be found in the General Analysis app) and select the surveys you’re interested in (on the left).
2. Click on Decimation, then turn on the Decimation Override using the tick box.
3. Set the Target Reduction. This number is how much mXrap will try to reduce the size of your surveys (i.e. with a Target Reduction of 90%, mXrap will try to reduce the number of points to 10% of the original size). You should set this to as high a number as possible, while still being able to see the level of detail in your surveys that you need. Somewhere between 60 and 90% often improves the speed of the 3D view dramatically, without making the surveys look awful.
4. Don’t forget to Rebuild the Cache and hit Save before looking at your 3D view!
If you need a refresher on updating surveys, watch the Survey Setup training video here.
Are you unable to see your surveys, but can’t work out why? Are some of your surveys located a long way away from the rest of your mine when they shouldn’t be?
This could be due to the input configuration of your survey files. The order of the co-ordinate components (Easting, Northing, RL) is sometimes different for some survey formats, so we have to ‘flip’ the X and Y co-ordinates. This is often the case for DTM files. To fix this:
1. Open the Survey Setup Window (can be found in the General Analysis app) and find the surveys with an issue.
2. Click on Input Config, then turn on the Override using the tick box.
3. Click the swap button between X and Y to change whether X or Y is read from the first column.
4. Don’t forget to rebuild the cache and hit save!
If you need a refresher on updating surveys, watch the Survey Setup training video here.
The grid-based hazard calculations in the Hazard Assessment app were discussed in a previous post. The Iso View describes the hazard at all locations within the mine but when you are considering the seismic risk for a particular work area, large events and strong ground motions may come from multiple sources. The Excavation View estimates the seismic hazard associated with working areas (minode locations) in a few different ways as described below.
P [ ML within R ]
The P [ML within R] marker style is the probability of exceeding the design magnitude within the design distance (R) of the minode location (per year). This is the simplest minode hazard estimate; you will notice the other marker styles take a bit longer to calculate because they include more complex ground motion (PPV’) calculations.
As discussed in a previous post, the grid-based hazard calculations result in a probability of exceedance assigned to each grid volume. So, for a design magnitude of ML2, the annualised probability of exceeding ML2 within each grid cell volume is computed. To compute the probability of exceeding ML2 within R of a minode, the exceedance probabilities for all the grid cells within R are integrated together.
In the 2D example below, there are seven grid cells within R of the minode. Let’s say each of these grid cells has a probability of exceeding ML2 of 1%. Then, the probability of exceeding ML2 within R of the minode would be:

P[ML2 within R] = 1 – (1 – 0.01)^7 = 6.8%
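The integration above can be sketched in a few lines of Python. This is only an illustration of the calculation described in this post, with the same made-up 1% cell probabilities as the example:

```python
# Hypothetical sketch of the grid-cell integration described above.
# Each grid cell within R carries an annualised probability of
# exceeding the design magnitude (here 1% each, as in the example).

def integrate_exceedance(cell_probs):
    """Probability that the design magnitude is exceeded in at least one cell."""
    p_none = 1.0
    for p in cell_probs:
        p_none *= (1.0 - p)          # probability that no cell exceeds
    return 1.0 - p_none

# Seven cells within R of the minode, each with P = 1%
p = integrate_exceedance([0.01] * 7)
print(f"P[ML2 within R] = {p:.1%}")   # ~6.8%
```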
The hazard for a single minode doesn’t help you much. You really want to know the hazard for the whole mine or for any possible work area. To compute the probability of exceeding ML2 within R of multiple minodes, you just integrate all the grid cell probabilities within R of any minode. This is illustrated below for a tunnel length with three minodes. Each grid cell is only counted once. In the hazard app, the footer of the Excavation View shows the P[ML within R] for all minodes. You can also use the selections tool to select any combination of minodes and the integrated hazard will be shown in the footer. The Volume Hazard tool calculates the P[ML within R] for any minode within each of your filter volumes.
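Pooling the grid cells for multiple minodes, with each cell counted once, amounts to a set union before the same integration. A minimal sketch, with invented cell ids and probabilities:

```python
# Hedged sketch of combining the hazard for multiple minodes. Grid cells
# within R of any minode are pooled (each counted once) before the
# probabilities are integrated. Cell ids and probabilities are invented.

def minode_group_hazard(cells_per_minode, cell_prob):
    """cells_per_minode: one set of grid-cell ids per minode.
    cell_prob: dict mapping cell id -> annual exceedance probability."""
    unique_cells = set().union(*cells_per_minode)  # each cell counted once
    p_none = 1.0
    for cell in unique_cells:
        p_none *= (1.0 - cell_prob[cell])
    return 1.0 - p_none

# Three minodes along a tunnel, with overlapping search spheres
cells = [{1, 2, 3, 4}, {3, 4, 5, 6}, {5, 6, 7, 8}]
probs = {c: 0.01 for c in range(1, 9)}
print(f"P[ML2 within R of any minode] = {minode_group_hazard(cells, probs):.1%}")
```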
As you can see from the illustrations above, the grid cells that are included “within R” are based on the distance to the grid centre point. This is why we recommend using a value for R that is larger than the grid spacing you are using. If you specify R much less than the grid spacing, you may see some odd artefacts where some minodes have far fewer grid points associated with them than others.
Ground Motion Hazard
The other minode marker styles in the Excavation View express the hazard based on the probabilistic strong ground motion (PPV’). As discussed in the last blog post, a Strong Ground Motion relationship is required to calculate the probability distribution of PPV’ based on the event source details and R. For each minode, the ground motion hazard, P[PPV’], is the probability of exceeding the design PPV’, anywhere within the volume associated with the minode (the tunnel section, per year).
For each minode, there is a probability of exceeding the design PPV’ due to a large event occurring within any particular grid cell. To compute P[PPV’] for a minode, each grid point must be considered and the probabilistic contribution of each integrated together. The grid cells closest to the minode (smallest R) typically contribute the most to the minode P[PPV’], but high hazard grid cells have a larger zone of influence because of the potential for very large events.
The grid-minode combinations with a very small P[PPV’] are ignored in the analysis to speed up the calculation. The minode P[PPV’] is calculated from the remaining grid point combinations. For each minode, the threshold probability to ignore a grid point can be specified from the control panel. Increasing the threshold is slightly less accurate but will increase calculation speed.
The P[PPV’] for each grid-minode combination is computed by discretising the magnitude, distance and PPV’ probability distributions. From the grid-based seismic hazard calculations, the magnitude distribution is known and the probability of each magnitude bin of large events can be calculated. Then, the distance from the grid cell to the minode volume is represented by the R distribution in a similar way to the figure below.
From every combination of ML and R bin, the strong ground motion relationship is used to calculate the full PPV’ probability curve for the minode. Only the magnitude bins that can possibly result in a PPV of interest are included in the calculations. This is controlled by the PPVmin control parameter. The PPV’ probability curve is plotted above PPVmin. The P[PPV’] can be quickly calculated for any design PPV’ values above PPVmin from the probability curve.
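A hedged sketch of this discretised combination for a single grid-minode pair is below. The ground motion model is an illustrative lognormal form with made-up coefficients, not the calibrated mXrap relationship, and the magnitude and distance bins are invented:

```python
import math

def p_ppv_exceed(ml, r, ppv_design, sigma=0.5):
    """Illustrative lognormal ground-motion model (NOT the calibrated
    relationship): median log10(PPV') = a*ml - b*log10(r) + c."""
    a, b, c = 0.5, 1.0, 0.5                      # made-up coefficients
    median_log = a * ml - b * math.log10(r) + c
    z = (math.log10(ppv_design) - median_log) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))     # P[PPV' >= design | ml, r]

def pair_exceedance(ml_bins, r_bins, ppv_design):
    """ml_bins: (magnitude, annual event probability) per bin.
    r_bins: (distance, weight) per bin, weights summing to 1."""
    p_none = 1.0
    for ml, p_event in ml_bins:
        # chance that an event in this magnitude bin exceeds the design PPV'
        q = sum(w * p_ppv_exceed(ml, r, ppv_design) for r, w in r_bins)
        p_none *= (1.0 - p_event * q)
    return 1.0 - p_none

ml_bins = [(1.0, 0.10), (1.5, 0.05), (2.0, 0.02)]
r_bins = [(80.0, 0.3), (100.0, 0.4), (120.0, 0.3)]
print(f"P[PPV'] for this grid-minode pair = {pair_exceedance(ml_bins, r_bins, 50.0):.2%}")
```

Once the probability curve is built this way, P[PPV’] can be read off for any design PPV’ above PPVmin.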
Note the example PPV’ probability curves in the figure below. The RED line is the probability curve for a single minode but as with the P[ML within R] calcs discussed above, you will often want to evaluate larger tunnel areas, or even the whole mine (BLUE). The PURPLE line is the probability curve for a small section of tunnel with multiple minodes and indicates the yearly probability of exceedance for PPV’ anywhere along that tunnel section. The GREEN line is the probability curve for the whole mining level.
Clearly the probability of exceedance will tend to increase as longer sections of tunnel are considered. So, even though the P[PPV’] for a single minode is small, the accumulated hazard when considering the whole mine can be much higher. This is why the hazard app displays the ground motion hazard in a couple of different ways other than the individual minode P[PPV’] to make the hazard ratings more intuitive.
The equi-probability zones marker style is a simple ranking of minodes from lowest to highest P[PPV’]. The marker style value is a percentile rather than a probability. For example, let’s say you have 100 minodes, the top 5 minodes with the highest hazard will be red (0.95-1.00). The next top 5 (6-10) highest hazard minodes would be orange (0.9-0.95) etc.
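The percentile ranking can be sketched as follows, with invented minode probabilities:

```python
# A minimal sketch of the equi-probability ranking: minodes are sorted by
# their individual P[PPV'] and assigned a percentile from just above 0
# (lowest hazard) to 1 (highest). The probabilities are invented.

def equi_probability_zones(p_ppv):
    """Return a percentile rank (0-1] for each minode, in input order."""
    order = sorted(range(len(p_ppv)), key=lambda i: p_ppv[i])
    ranks = [0.0] * len(p_ppv)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank / len(p_ppv)
    return ranks

probs = [0.001, 0.03, 0.007, 0.02]
print(equi_probability_zones(probs))   # [0.25, 1.0, 0.5, 0.75]
```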
Note that the equi-probability zones do not illustrate an absolute hazard rating, rather illustrates relatively low hazard areas and relatively high hazard areas. There may be cases where the top 5% of minodes are actually low hazard. Conversely, the bottom 5% minodes may still be quite a high absolute hazard. The PPV probability chart plots the curves for each equi-probability zone.
The Cumulative P[PPV] marker style also ranks minodes from lowest to highest P[PPV’] but the marker values are probabilities accumulated from lowest to highest. As described previously, to accumulate the hazard for multiple minodes, the probabilities are integrated together. For example, if you have four minodes with individual P[PPV’] of 1%, 2%, 5% and 7%, then the probability of exceeding the design PPV’ AT ANY of the four minodes is:

P = 1 – (1 – 0.01)(1 – 0.02)(1 – 0.05)(1 – 0.07) = 14.3%

The Cumulative P[PPV] marker accumulates the probabilities in the same way, one at a time, from lowest to highest hazard. So for example:
The lowest hazard minode has the same individual P[PPV] and Cum P[PPV].
The second lowest hazard minode has a Cum P[PPV] equal to the combined hazard for the two lowest minodes.
The minode with the median individual hazard has a Cum P[PPV] equal to the combined hazard integrating all lower 50% of minodes.
The highest hazard minode has a Cum P[PPV] equal to the accumulated hazard for all minodes.
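The accumulation described in the list above can be sketched as follows (minode probabilities invented for illustration):

```python
def cumulative_p_ppv(p_ppv):
    """Accumulate minode hazards from lowest to highest.
    Returns (sorted individual probabilities, running accumulated hazard)."""
    cum, p_none = [], 1.0
    for p in sorted(p_ppv):
        p_none *= (1.0 - p)           # integrate one more minode
        cum.append(1.0 - p_none)
    return sorted(p_ppv), cum

# Four minodes with individual P[PPV'] of 1%, 2%, 5% and 7%
_, cum = cumulative_p_ppv([0.05, 0.01, 0.07, 0.02])
print([f"{c:.1%}" for c in cum])
```

The last value is the accumulated hazard for all minodes, matching the 14.3% from the four-minode example.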
The Cum P[PPV] marker has the benefit of showing both the cold-to-hot scale of all minodes as well as an indication of the absolute hazard.
The Probability Class marker style is similar to the Cumulative P[PPV] marker style, except you can specify your own colour scale and classes and give them a name. This can help to communicate changing hazard areas to operational personnel.
You can see the classes in the risk matrix. You should notice that the Cum P[PPV] values correspond to the Probability Classes according to the numbers in that matrix.
For more details on the ground motion hazard calculations see Wesseloo (2018).
The Strong Ground Motion (SGM) relationship is used to calculate the Peak Particle Velocity (PPV) generated by a seismic event. You may also hear this referred to as a Ground Motion Prediction Equation (GMPE), but only the maximum velocity is estimated, i.e. the strong ground motion, rather than the full, complex wave motion.
The PPV is generally calculated for a specific location based on:
• distance to the seismic event (R)
• source magnitude (ML)
• source radius (Rs)
• static stress drop (SSD)
The source radius can be computed as a function of magnitude and the adjustment due to SSD is sometimes excluded. So PPV is often simply a function of ML and R.
You have probably seen the SGM relationship illustrated in a similar way to the figure below. What is sometimes not recognised is that there is an associated uncertainty in these relationships. In the case below, the PPV values are based on a 10% chance of exceedance for a given ML and R.
The wave motion from a seismic event through the rock mass is highly complex and uncertain. So, for a given ML and R, the PPV is not a single value but a probability distribution. This is illustrated below: for an ML2 event at a distance of 100 m, the PPV distribution is plotted along with the 10th, 50th and 90th percentile values for probability of exceedance.
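A hedged sketch of reading percentiles off such a distribution, assuming a lognormal PPV distribution with an invented median and spread (the real relationship is site calibrated):

```python
from statistics import NormalDist

def ppv_percentile(median_ppv, sigma_log10, pct):
    """PPV value (same units as the median) at the given percentile,
    assuming log10(PPV) is normally distributed."""
    z = NormalDist().inv_cdf(pct)
    return median_ppv * 10 ** (z * sigma_log10)

median, sigma = 100.0, 0.4   # mm/s; made-up values for an ML2 at 100 m
for pct in (0.10, 0.50, 0.90):
    print(f"{pct:.0%} percentile PPV = {ppv_percentile(median, sigma, pct):.0f} mm/s")
```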
There are many factors that contribute to the uncertainty in ground motion that results from a seismic event. The ground motion does not radiate from a seismic event uniformly. Each source mechanism has its own radiation pattern where the magnitude of the ground motion varies depending on the direction. The radiation pattern also differs for the P-wave and S-wave.
As the body waves radiate outwards from the source they attenuate but rock mass anisotropy affects the rate of attenuation. The wave attenuates faster when cutting through lamination than it does when travelling along (parallel) to the bedding or foliation plane.
Excavations, different lithologies and major contacts and discontinuities will create reflections and refractions. The wave will split into a separate P-wave and S-wave when it reflects off a boundary or refracts into a new medium. Multiple waves can superimpose to create stronger ground motions than the individual waves.
The SGM relationship does not give the PPV expected on the excavation surface. The SGM equation is based on the recorded PPV at sensors which are normally installed well into the rock mass, away from the excavation surface. So, the calculated PPV includes the uncertainty from different radiation patterns, natural variability, reflections and refractions but does not include surface effects. The strong ground motion of the body wave, without including the effect of the excavation surface, is sometimes referred to as PPV’. The true surface PPV is the PPV’ with an additional amplification factor applied.
The amplification is due to a couple of different effects of the surface. The amplitude is expected to double at the free surface due to the superimposition of the incoming wave and the reflected wave. The amplification is more than double for a corner pillar due to the closed geometry. The body waves can also interact with the free surface to form Rayleigh and Love surface waves. These waves propagate along the surface rather than through the rock mass. The low velocity fractured zone around the excavation can enhance the formation of surface waves as the seismic energy is trapped between the free surface and the fracturing boundary. Waves also increase in amplitude as they move into a lower velocity medium such as the fractured zone.
In earthquakes, surface waves cause the most damage to infrastructure. The amplitude of Rayleigh and Love waves tends to be higher than body waves. Surface waves also attenuate more slowly, i.e. travel further, since the geometric attenuation is only along the surface rather than in all three dimensions. Another common observation with earthquake damage is that buildings on soft soils are more heavily damaged than buildings on solid rock. This is similar to the case of the fractured zone around excavations, although there is less experimental evidence of this phenomenon in underground tunnels.
If you are interested in reading more, these papers are a good place to start:
• Wesseloo (2018) – Description of the SGM relationship and hazard calculations done in mXrap.
Procedure for site specific SGM calibration
The SGM relationship is used to calculate PPV in the Hazard Assessment and Large Event Analysis apps. The default relationship is from the Canadian Rockburst Support Handbook. This relationship is mostly based on recorded ground motions at Brunswick, El Teniente and Creighton mines and may not be applicable to your site. It is fairly simple to calibrate your own site specific SGM relationship using the data recorded by your seismic system. The PPV is recorded by each sensor for each event. With this data, we have a tool to calibrate your site specific SGM relationship. If you would like to do this for your site, the procedure is as follows:
1. Export your PPV data from your seismic database. We have done this a few times for IMS data; the export is called an Event-Trigger query, a table with a row for each sensor hit per event. Contact firstname.lastname@example.org if you need assistance. At minimum we need the export to include the sensor location, sensor PPV, event location, event magnitude and event static stress drop.
2. Save the PPV data into the #Data folder in your root and run a default backup in mXsync. Contact email@example.com to let us know you would like us to calibrate your SGM relationship.
3. We will generate your site specific SGM equation and send the info back as a patch in mXsync.
4. Apply the patch in mXsync and run another default backup.
As part of the ACG’s Ground Support Guidelines for Rockburst Prone Conditions research project, we have developed an application for damage mapping. It is a web-based application designed for use with a tablet. This allows users to do their damage mapping offline while underground on the tablet; when the tablet is connected to the network, the information is synced with our server and pulled into mXrap (making it a single pass process).
Each damage mapping instance is stored as a separate report. Within each level plan, mine development is segmented into short lengths (approximately 5m) called ‘Tracks’. Information is stored on each of these tracks, to allow the history of each underground location to be monitored.
Tracks in damage mapping web application. Tracks shown in orange are selected for use in the current report
Data collected for each track includes:
Rock Mass Characteristics
Installed Ground Support
Falls of Ground
Assigning ground support in web application
In addition to this information, damage data is also collected with more detail at each point on the profile (backs, shoulder, walls and floor). Damage information is captured in terms of both broad damage scales (i.e. Rock and Support Damage Scales) and detailed observations of individual support element damage. In addition to damage data, information on locations which have not been damaged can also be captured.
Damage data on different points on the profile of the drive (web application)
The application focuses on damage mapping for rockburst occurrences; however, it will soon be expanded to cater for routine damage mapping, with a site specific configuration allowing mines to choose which information to capture for day-to-day damage mapping.
Once the damage data has been synced into mXrap, the mXrap app is used for visualisation and analysis. The basic viewer window operates in a similar manner to the general analysis 3D view. Users can view and filter their damage locations and colour them by different parameters. Seismic events, blasts etc. can be seen and filtered simultaneously.
3D view showing damage locations coloured by Support Damage Scale (mXrap application)
The user can also select individual tracks/points and see the more detailed damage data that was entered.
Points on profile showing damage (mXrap application)
This includes photos, which allows users to easily organise their photos so that they can look at photos from a specific location underground over time.
The Iso View in the Hazard Assessment application expresses the seismic hazard in two ways.
The current yearly hazard within the chosen grid volume. This is shown in the footer of the 3D view, as the probability of an event exceeding the design magnitude.
The spatial distribution of the hazard. This is highlighted by the hazard isosurfaces.
In the case below, the design magnitude is set as ML2. The corresponding hazard isosurfaces for ML2 can be interpreted as the most likely location for that event to occur.
The ML rating essentially delineates the areas of the mine from lowest to highest hazard. The volume bounded by the ML2 isosurface indicates the ML rating is above ML2. Note that the colours in the legend are slightly different from the isosurfaces’ apparent colour in the 3D view. This is due to transparency effects and viewing multiple transparent surfaces on top of one another.
It is important to note that while the data period can change (6 months in the example above), the hazard calculations are all referring to the yearly hazard. This is a simple matter of normalisation. E.g. if you record 100 events in an area in six months, this area is assigned an activity rate of 200 events per year.
The use of yearly hazard is to help interpretation. Reducing the time period used in the definition reduces the probabilistic hazard and this can be misleading. For example, let’s say you give your mine manager a report every day and it says that based on recent data, the probability that we will experience an event in the next 24 hours over ML2 is 0.77%. You do this every day for a year and each day, the mine manager looks at the number and thinks, “Hmm, 0.77%, that’s pretty small, risk is pretty low”. A daily hazard of 0.77% is the same as the yearly hazard in the example above.
1 – (1 – 0.0077)^365 = 94%
The mine manager may interpret the risk more accurately when presented with the same hazard but expressed for a hazard period that is more intuitive.
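The conversion above is simple compounding. A quick sketch, using the daily probability from the example:

```python
# The daily hazard from the example compounds to the yearly hazard:
# a 0.77% chance per day is roughly a 94% chance per year.

def period_hazard(p_per_period, n_periods):
    """Probability of at least one exceedance over n_periods."""
    return 1.0 - (1.0 - p_per_period) ** n_periods

print(f"Yearly hazard = {period_hazard(0.0077, 365):.0%}")   # ~94%
```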
The current yearly hazard displayed in the footer of the 3D view applies to the entire volume of the chosen grid. We also compute the yearly hazard in the VTM table in General Analysis. So, you might reasonably assume that if you specify a volume in General Analysis the same as the grid volume in the Hazard app, the two numbers should match. In fact, while the probability of exceeding ML2 is 94% in the example above, the same volume and time period in the VTM table gives 86%.
This is because the two calculation methods are quite different. To compute hazard, the main inputs are the seismic activity rate and the b-value (Mmin and MUL are also required). In the VTM table, a single b-value and activity rate is computed for events within the volume, and the seismic hazard is computed directly. In cases where the b-value does not vary significantly within the volume, this is a reasonable approach. However, in most cases, the b-value varies in space, and this approach tends to underestimate the seismic hazard.
This is illustrated in the figure below. You can represent the full volume with its activity rate and b-value to compute the probabilistic hazard, like in the VTM table. In the Hazard app, the variations in activity rate and b-values are calculated on a regular grid through space (in sub-volumes). While the event search radius for each grid point may exceed the grid cell spacing, the activity rate is normalised and the b-value is assigned to represent the seismicity for the specific grid cell volume. The probability of exceeding the design magnitude within each sub-volume can then be calculated. Then the probabilistic hazard for the full volume can be calculated by integrating together all of the sub-probabilities.
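The underestimation can be shown with a toy comparison. All numbers below are invented; each sub-volume follows a Gutenberg-Richter distribution where the annual rate of events above magnitude m is rate × 10^(−b(m − Mmin)), and exceedance is treated as Poisson:

```python
import math

def p_exceed(rate, b, mmin, m_design):
    """Annual probability of at least one event exceeding m_design."""
    lam = rate * 10 ** (-b * (m_design - mmin))   # annual rate above m_design
    return 1.0 - math.exp(-lam)                   # Poisson exceedance probability

mmin, m_design = 0.0, 2.0
sub_volumes = [(100, 0.8), (100, 1.2), (100, 1.6)]  # (annual rate, b-value)

# Single-volume approach: pooled rate with the average b-value
p_single = p_exceed(300, 1.2, mmin, m_design)

# Grid approach: integrate the sub-volume probabilities together
p_none = 1.0
for rate, b in sub_volumes:
    p_none *= (1.0 - p_exceed(rate, b, mmin, m_design))
p_grid = 1.0 - p_none

print(f"single volume: {p_single:.1%}, integrated sub-volumes: {p_grid:.1%}")
```

The low b-value sub-volume dominates the integrated hazard, which a single averaged b-value cannot capture.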
ML Rating – Technical Meaning
As mentioned already, the yearly seismic hazard is expressed as the probability of exceeding the design magnitude. An alternate definition of hazard is to use a design reliability rather than a design magnitude. I.e. the hazard can be expressed as the magnitude that, to the design reliability, will not be exceeded. We use a reliability of 85%. The ML rating is the design magnitude that would have a probability of exceedance of 15%.
An ML rating is assigned to each grid point to compute the isosurfaces. On the surface of the ML2 iso for example, the ML rating refers to the magnitude that, to a reliability of 85%, would not be exceeded within the standard volume given one year’s seismicity. The standard volume we use is that of a sphere of 50m radius.
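A sketch of inverting the hazard definition to get an ML rating. Gutenberg-Richter rates and Poisson exceedance are assumed, and the activity rate and b-value are invented:

```python
import math

def ml_rating(rate, b, mmin, reliability=0.85):
    """Magnitude that, to the given reliability, is not exceeded in a year,
    assuming an annual exceedance rate of rate * 10**(-b*(m - mmin))."""
    lam_target = -math.log(reliability)        # rate giving P[exceed] = 15%
    # solve rate * 10**(-b*(m - mmin)) = lam_target for m
    return mmin + math.log10(rate / lam_target) / b

print(f"ML rating = {ml_rating(rate=50.0, b=1.0, mmin=0.0):.2f}")
```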
Minodes are what we use in multiple places in mXrap when we want to assign information to development. They are just point locations, dotted along your development, at roughly 5 m intervals. Things like ground support and PPV hazard are really only relevant for development locations, so minodes are our way of denoting these places. Minodes are also used to calculate the span of the excavation at that point. The tunnel length is also used in the Hazard Assessment app.
Minodes are not generated automatically for new development. The minode calculations use an older generation of code that can’t be used in the current mXrap. So, we need to generate the minodes for you periodically as you add more development. Minodes can be created from floor strings but 3D development surveys work best, the same formats you use for mXrap.
Minode Update Procedure
Add your most recent 3D development surveys to the #Data folder in your root. Include all surveys where you want to show minodes, even if minodes are already there.
Run a default backup of your root folder in mXsync. If you are unsure how to do that, review the “Intro and Default Backup” video on the mXsync page.
Send an email to firstname.lastname@example.org and ask us to update your minodes. Please confirm that you have updated your surveys, run a backup in mXsync, and indicate which surveys are for minode generation. It can take some time depending on other work, so please indicate if it is especially urgent.
We will generate your new minodes and merge all previous information from the old minodes. We will let you know when it’s done via email.
Your new minodes will be sent as a patch in mXsync back to you. All you need to do is apply the update. See the “Apply patch” video on the mXsync page.
Review your new minodes (in the Hazard Assessment app for example) and confirm they are as expected. Then run another default backup in mXsync if you are happy. Contact support if there are any problems.
As mentioned in the last blog post, Energy and Moment are independently calculated based on the displacement and velocity spectra of the recorded waveforms. Another spectral parameter is the corner frequency.
The figure on the left shows the corner frequency (f0) on theoretical displacement, velocity and acceleration spectra. The calculation of corner frequency relies on fitting a reliable source model to the observed spectra.
Many commonly used source parameters are derived from Energy, Moment and Corner Frequency. Below is a quick guide to these parameters, illustrated with an Energy-Moment chart that has events coloured by the relevant parameter.
Length vs Frequency
The corner frequency is indicative of the dimensions of the source (source radius in the case of a circular fault). This is a physical relationship easily demonstrated. In the linked video, you can see and hear the decrease in frequency as the length of the ruler is increased. Another example is the change in frequency resulting from changing the length of vibrating guitar strings. For the same physical reasons, larger seismic events tend to have lower frequencies. The radius of the seismic source is calculated from f0.
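The source radius calculation from f0 can be sketched with the Brune model, r0 = 2.34·Vs/(2π·f0), assuming a typical hard-rock shear-wave velocity for illustration:

```python
import math

# Source radius from corner frequency using the Brune model.
# The shear-wave velocity is an assumed typical hard-rock value.

def brune_source_radius(f0_hz, vs_ms=3500.0):
    """Source radius (m) for a circular source with corner frequency f0."""
    return 2.34 * vs_ms / (2.0 * math.pi * f0_hz)

for f0 in (10.0, 50.0, 200.0):
    print(f"f0 = {f0:>5.0f} Hz  ->  r0 = {brune_source_radius(f0):.1f} m")
```

Consistent with the ruler and guitar-string examples, lower corner frequencies correspond to larger sources.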
In theory, the source volume can be calculated from the Moment and source radius. In practice however, “Apparent” volume is more commonly used to approximate the source volume. The source volume is proportional to the cube of the source radius, so any errors in the source radius (or corner frequency) are amplified. The method of calculating Apparent Volume, based on Energy and Moment, is more stable.
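A sketch of the Apparent Volume calculation, V_A = M² / (2·G·E), which depends only on Moment, Energy and the shear stiffness, avoiding the cubed source-radius sensitivity. The shear stiffness and event values below are assumed for illustration:

```python
# Apparent Volume from moment (N·m) and radiated energy (J).
# G is an assumed typical rock shear stiffness.

def apparent_volume(moment_nm, energy_j, shear_modulus_pa=3.0e10):
    """Apparent volume (m^3): V_A = M^2 / (2 * G * E)."""
    return moment_nm ** 2 / (2.0 * shear_modulus_pa * energy_j)

# e.g. an event with M = 1e11 N·m and E = 1e6 J
print(f"Apparent volume = {apparent_volume(1e11, 1e6):.0f} m^3")
```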
You know that energy and moment are parameters to describe seismic events. But what exactly is their physical meaning for a seismic event source and how are they calculated?
Moment and energy are both separate (but related) measures of the strength of a seismic event. A similar example is a car engine: its performance is described with two separate (but related) measures, power (hp or kW) and torque (Nm). In a simplified piston and crankshaft arrangement, the torque is the twisting force exerted by the force of the piston on the lever arm (crankshaft). Power relates to the rate at which work is done and how fast the torque is applied (torque x RPM). So, moment, energy and power are all related measures of the system performance.
You might have heard that energy and moment are independent source parameters. This is to distinguish them from derived parameters (parameters that are calculated from energy, moment, corner frequency etc). They are independently calculated but they are not unrelated to one another. Moment is related to the displacement (strain) of the source. Energy is related to the speed at which the displacement happens. In general, higher stress conditions lead to higher rates of displacement and therefore higher energy relative to moment.
What does Moment physically mean?
So, you know that moment is a force applied to a lever arm. You might be wondering, where is the lever arm for a seismic source? In the context of seismic sources, moment is a force couple. Two equal and opposite forces, with a notional distance between them, form a definite moment.
Let’s look at a force couple applied to a small crack. The images below are displacement results from a simple Phase2 model of a small horizontal and vertical crack. The arrows indicate the direction of the displacement. Notice that the displacement pattern is essentially the same for both force couples. This is why, when you do an inversion from the observed waves trying to model the source, the solution comes down to a double-couple. There is no way to distinguish between the two possible solutions.
The displacement field caused by a dislocation on a plane is fundamentally equivalent to that produced by a double-couple. For a homogeneous and isotropic medium, the moment of a seismic event caused by the shear fracture on a plane is:
M = G x D x A
G = Shear stiffness of the rock
D = Average displacement
A = Area of slip
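As a quick worked example of M = G x D x A (all input values hypothetical, chosen only for illustration), with a conversion to moment magnitude via the standard Hanks-Kanamori relation for context:

```python
import math

# Hypothetical values for a small shear rupture (illustrative only)
G = 30e9    # shear stiffness (modulus) of the rock, Pa
D = 0.005   # average displacement on the slip plane, m (5 mm)
A = 100.0   # slip area, m^2 (roughly a 10 m x 10 m patch)

M0 = G * D * A   # seismic moment M = G x D x A, in N*m  -> 1.5e10 N*m

# For context: moment magnitude (Hanks-Kanamori relation, M0 in N*m)
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)   # ~0.78 for these inputs
```

Note how the moment scales linearly with each of the three inputs; doubling the slip area or the average displacement doubles the moment.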
How is Moment calculated?
We are rarely in a position to be able to measure the area of slip or the amount of displacement. In practice, moment is calculated from seismic waveforms, usually in the far-field (outside the source volume). The Brune model is used to relate the characteristics of the seismic source to the characteristics of the recorded waveform. The model is based on a circular disc (penny) shaped dislocation surface where a tangential stress drop is applied instantaneously, resulting in a shear wave propagating perpendicular to the fault surface.
To compute Moment, a Fourier transform is required to convert the displacement waveform from the time domain to the frequency domain. The frequency content is also referred to as the spectrum of the signal. Moment is proportional to the spectral level (Ω0); the plateau of the displacement spectrum at lower frequencies.
The spectra for each sensor must be corrected for geometric attenuation and decay and the Brune model must be fitted to the signal. Moment can then be computed as:
M0 = 4πρV³Ω0R
ρ = rock density
V = the sonic velocity in rock
R = the distance to the source
In theory, the Brune model is only applicable to the S-wave but in practice, the same method is used for the P-wave. The final Moment for a seismic event is the average of the S-wave and P-wave moment.
M0 = (Mp + Ms)/2
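A minimal sketch of this step, using the simplified formula above (radiation-pattern and free-surface corrections are omitted, as in the text, and all input values are hypothetical):

```python
import math

def brune_moment(rho, V, omega0, R):
    # M0 = 4*pi*rho*V^3*Omega0*R (simplified; radiation-pattern and
    # free-surface corrections omitted, as in the formula above)
    return 4.0 * math.pi * rho * V**3 * omega0 * R

# Hypothetical inputs (illustrative only)
rho = 2700.0              # rock density, kg/m^3
Vp, Vs = 6000.0, 3500.0   # P- and S-wave velocities, m/s
R = 500.0                 # distance to the source, m

Ms = brune_moment(rho, Vs, omega0=2e-9, R=R)    # from the S-wave spectral level
Mp = brune_moment(rho, Vp, omega0=1e-10, R=R)   # from the P-wave spectral level
M0 = (Mp + Ms) / 2.0                            # final event moment
```

The spectral level Ω0 comes from the corrected, fitted displacement spectrum for each wave type; the values above simply stand in for that fitting step.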
What does Energy physically mean?
While seismic moment is a better description of the intensity of a seismic event within the near-field, seismic energy is a better description of the potential damage outside the source volume. The energy source parameter does not represent the total work done during the event, rather the energy that is radiated away from the source. The elastic energy radiated by a seismic event is only a fraction of the total work done by the source.
How is Energy calculated?
Similarly to moment, energy is calculated in the frequency domain, except energy uses the velocity spectrum rather than displacement. The radiated energy is proportional to the velocity-squared spectrum integrated across the full frequency domain. The total energy for a seismic event is the sum of the P-wave and S-wave energy.
E = Ep + Es
The calculations for seismic energy and moment are complex and there are several assumptions and sources of error such as:
Error associated with integrating recorded wave to displacement in the time domain
Assumptions associated with the Brune source model
Error associated with fitting the Brune model to the displacement and velocity spectra, including when bandwidth limitations of seismic systems result in a poorly constrained fit
Error associated with the calculation of source location (R)
Did you know you can download root folders from mXsync? We have uploaded root folders for Tasmania and Big Bell mines. These sites have closed and the data has been made available for research. Have a look at the “Download a component” training video for a guide to downloading these roots onto your computer. The data can be handy for research projects or just for curiosity’s sake. You may even want to download the Xgames root…. for work purposes of course.
We have added some new features to the Hazard Assessment app to calculate the minode hazard for filter volumes. This works just like the current minode calculations, where you can select minodes and compute the probability, P, of exceeding your design magnitude within a distance R of any selected minodes. The volume hazard refers to the seismic hazard for minodes within the filter volume. The same backdate, backrange, Mdesign and R parameters apply as for the existing tools.
Another tool has been added to track the volume hazard over time. Essentially this repeats the volume hazard calculations, stepping the backdate through time and plotting the hazard per volume. Refer to the “Track Volume Hazard” training video for a walkthrough of the new tools in the hazard app.
We will need to upgrade your root before you can use the new tools. If you would like us to upgrade, drop an email to email@example.com. Root upgrades are fairly quick but you will need to give us access via Teamviewer, Webex or similar.
Probabilistic seismic hazard calculations are dependent on the number of events (N) and the b-value. But which has more effect on the hazard result? The chart below shows how seismic hazard varies with b-value for N = 1,000, N = 10,000 and N = 100,000.
The seismic hazard in the chart below can be considered in the following way. For a given time span and volume, if N events have been recorded, what is the probability that one of those events was above Mdesign? In this case Mdesign = ML2.
Seismic hazard increases with increasing N and decreasing b-value. Note on the chart, N = 1,000 and b = 0.9 gives the same seismic hazard as N = 10,000 and b = 1.2 (approx). In other words an increase of 0.3 in b means you need 10 times more events for an equivalent hazard.
So, seismic hazard is very sensitive to the b-value of the area. This is important to consider when looking at daily activity rates. If the b-values differ, 100 events in one area may represent a very different hazard to 100 events in another area.
Another point of interest in the chart is that for areas with b-values above 2, even very high event numbers represent low hazard.
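The numbers in the chart can be sketched as follows, assuming an open-ended Gutenberg-Richter distribution and an assumed magnitude of completeness of Mmin = -1 (the chart's actual Mmin is not stated, so the equivalence is approximate):

```python
def hazard_probability(N, b, m_design, m_min):
    # Per-event probability of exceeding m_design under an open-ended
    # Gutenberg-Richter distribution, complete above m_min:
    p = 10.0 ** (-b * (m_design - m_min))
    # Probability that at least one of N independent events exceeds it:
    return 1.0 - (1.0 - p) ** N

# The trade-off noted above: Mdesign = ML2, assumed Mmin = -1
h1 = hazard_probability(1_000, 0.9, 2.0, -1.0)
h2 = hazard_probability(10_000, 1.2, 2.0, -1.0)
# h1 and h2 come out roughly similar, i.e. +0.3 in b ~ 10x the events
```

The exact crossover depends on the assumed Mmin and Mdesign, but the sensitivity to b-value is clear: the b term sits in the exponent, while N only enters through the compounding of per-event probabilities.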
Yes, this is a frequently asked question…. MUL or MUpper-Limit refers to the truncating magnitude of the Gutenberg-Richter distribution. We used to refer to this as Mmax in the Hazard Assessment app and on the Frequency-Magnitude chart but we found there was confusion caused by Mmax being used to describe multiple things. Hopefully if we refer to MUL or the Upper-Limit Magnitude, this will clear up the terminology a little.
A quick review on the terminology that concerns the Frequency-Magnitude chart and the Gutenberg-Richter distribution:
Mmin – The magnitude of completeness, the dataset is considered complete above this magnitude (property of the data)
b-value – The slope of the Gutenberg-Richter distribution, describes how the frequency of events scales with magnitude (property of the statistical model)
Xmax – The largest magnitude event in the dataset (property of the data)
a/b – The magnitude at N = 1 of the GR distribution (property of the model, maximum likelihood, see previous blog post)
max(m,n) – The probability density function of the largest event among n recorded events; a property of the GR statistical model. In other words, given a certain GR model, if you record n events, what is the largest event? This is not a single number but a likelihood distribution. The maximum likelihood of the largest event is the a/b value.
MUL – The Upper-Limit Magnitude of the max(m,n) distribution. It is an estimate only and a property of the statistical model.
The truncating magnitude has slightly different meanings in mining seismology and crustal seismology. MUL is usually referred to as Mmax in crustal seismology literature and is generally considered constant for a particular area. In mining seismology MUL generally increases over time given the gradual increase in mining dimensions and loading of the rock mass. For this reason the definition is slightly modified in mining seismology to be the upper limit of the next largest event.
Why do we need an upper-limit or truncating magnitude?
The truncated Gutenberg-Richter distribution, rather than the open-ended distribution, is the most common frequency-magnitude relationship used in mine seismology. If there is no upper limit given to the GR distribution, then to evaluate the total energy of events in the relevant time period, the energy tends to infinity as the relationship is integrated above Mmin. This is clearly unrealistic.
We know there is a physical limit to possible magnitudes since the size of large earthquakes is related to the slip area of the fault and the physical size of faults is limited. Earthquakes on Earth above magnitude 10 (Richter) are essentially impossible given the size of known faults and a magnitude above 12 represents a fault area larger than the Earth itself!
So it is safe to say that MUL for a particular mine is going to be less than Richter Magnitude 10. The question is how much less is reasonable given the significantly reduced physical dimensions in mining.
How do we estimate MUL?
MUL can be estimated empirically using a dataset compiled by McGarr et al. (2002) of large events and the largest dimension of the human activity associated with them. The figure on the right comes from Wesseloo (2018), who added a few extra points to the dataset from Australian and Canadian mines. The range applicable to mining indicates rough dimensions between 500 and 5,000 m.
Aside from the empirical approach, there are also statistical approaches to estimating MUL. These generally take the form:
MUL ≈ Xmax + Δ
There are a number of different methods for calculating the Δ value. Many of these methods are described by Kijko and Singh (2011). Most of these have been implemented in the Hazard Assessment app along with the associated uncertainty of each method as described by Lasocki and Urban (2011).
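As a minimal illustration of the Xmax + Δ form, here is one of the simplest estimators of this type (the Robson-Whitlock form, where Δ is the gap between the two largest observed magnitudes). The methods actually implemented in the Hazard Assessment app are more sophisticated, per Kijko and Singh (2011):

```python
def mul_robson_whitlock(magnitudes):
    # Robson-Whitlock form: Delta is the gap between the two largest
    # observed magnitudes, so MUL = Xmax + (Xmax - X2)
    x = sorted(magnitudes, reverse=True)
    return x[0] + (x[0] - x[1])

mags = [0.1, 0.5, 1.2, 1.4, 1.9, 2.3]    # hypothetical event magnitudes
mul = mul_robson_whitlock(mags)           # Xmax = 2.3, Delta = 0.4 -> MUL = 2.7
```

The intuition: a big gap between the largest and second-largest events suggests the distribution could extend well beyond what has been observed so far.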
It is better to over-estimate MUL than to under-estimate it. In terms of probabilistic seismic hazard calculations, the truncated GR model will always give a lower hazard result than the original GR, for magnitudes approaching MUL. For magnitudes well below MUL, the seismic hazard calculations are the same. In the Hazard Assessment app, we take the maximum of each MUL + σ estimate from multiple methods.
These statistical approaches assume the recorded magnitudes of large events are reliable. Moment is under-recorded for large events if there are no low-frequency sensors installed. The figure to the left comes from Morkel and Wesseloo (2017) showing the effect on the frequency-magnitude relationship, given certain sensor bandwidth limitations.
In cases like this it is best to override the MUL as it is likely to be under-estimated with statistical methods.
While it is important to understand what MUL is and how it affects seismic hazard calculations, it is not something to use for design purposes or to communicate seismic hazard. It is just one part of how seismic hazard is defined. By definition, the probability of an event exceeding MUL is zero, so it isn't a great measure of seismic hazard.
If you have any questions regarding this topic, or something to add, feel free to leave a comment or send an email to support.
We started making training videos about 12 months ago and feedback has been quite positive. At the last AGM, a suggestion came for a training programme aimed at new users to mXrap. The training videos are currently stored by app but a specific programme would help new users with a logical order for progressing through the training content.
We have made a new page for the Training Programme under the Training tab. The programme is structured in several user levels, from a basic introduction to General Analysis, moving through all the apps and finally to advanced app building tutorials. There are a few links to relevant blog posts and papers that will help users understand some of the analysis concepts. There are also exercise questions in each section for users to complete using their own data.
Charts usually auto-adjust their ranges to the input data. This is often what you want, but occasionally it makes it harder to compare charts with different filters applied. A handy tip is to enable the “Zoom and Pan” option in the top-left. This disables the auto-adjustment, so if you change the filter you can compare the two charts directly. The example below compares the FM chart for all events with the events for just the last month. Enabling Zoom and Pan means the axes remain the same for both charts.
You can also zoom and pan to a different area. Remember if you scroll the mouse in the chart itself, it zooms both axes together. You can zoom a single axis independently by scrolling on the axis label area.
Most users are probably aware of the Quick Reference Guide in the Cheat Sheets. It lists all the mXrap shortcuts and hotkeys but it is spread over a few pages and can be a bit tough to find what you’re looking for. Below is a one-page Quick Reference Guide for more of a visual lookup of the main controls. The controls for the new Annotations tool are not included but help for those controls is available in mXrap itself. Here is the PDF version of the one-page guide if you want to print a copy.
The a/b value is sometimes used as a measure of seismic hazard but there are some common mistakes made with this analysis and interpretation.
What is a/b?
The Gutenberg-Richter distribution is a statistical model that describes a log-linear relationship between the number of events, N, exceeding a given magnitude, M.
log10 N = a – bM
At N = 1, M = a/b. The figure below shows an example of a frequency-magnitude chart with the a/b value highlighted.
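For illustration, a/b can be computed from a catalogue by fitting the GR model, e.g. with the standard maximum-likelihood b-value estimator (Aki, 1965). The synthetic catalogue below is purely illustrative:

```python
import math
import random

# Synthetic catalogue: under the GR model, magnitudes above Mmin are
# exponentially distributed with rate b*ln(10) (b_true = 1 assumed)
random.seed(1)
m_min, b_true = -1.0, 1.0
beta = b_true * math.log(10.0)
mags = [m_min + random.expovariate(beta) for _ in range(5000)]

# Maximum-likelihood b-value (Aki, 1965) and the matching a-value
b = math.log10(math.e) / (sum(mags) / len(mags) - m_min)
a = math.log10(len(mags)) + b * m_min   # so that N(m_min) = total count

a_over_b = a / b   # magnitude where the fitted line crosses N = 1
```

With 5,000 events and b near 1, a/b lands around magnitude 2.7; note it depends directly on how many events are in the catalogue, a point picked up below.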
Does a/b mean anything?
It is important to distinguish between properties of the dataset and properties of the statistical model. The a/b value is a property of the Gutenberg-Richter statistical model but it is defined at a particular data point (N = 1). The a/b value does have some meaning, but that’s really only because the a and b value both mean something (although I’ll come back to the a-value later). In terms of seismic hazard, the activity rate and b-value are the two primary inputs required.
The focus on the magnitude where N = 1 is somewhat arbitrary. The statistical model describes the relative frequency of all magnitudes. It is just as valid to normalise the frequency axis to a percentage, i.e. express N as a percentage of the number of events at M = Mmin. So in the figure below, at Mmin the frequency is 100%, and events over M = 1 represent 0.1% of all events over Mmin. Note the a/b magnitude represents approximately 0.006% of events. So the magnitude at N = 1 loses its significance. Asking about the significance of a/b is like asking about the significance of the magnitude of the top 0.1% of events. Why not the top 0.01% or 0.001%?
The normalisation trap (or the non-normalisation trap)
The reason the a/b value doesn't mean much for seismic hazard is that the a-value by itself is meaningless. The number of events, by itself, tells you nothing about hazard because it has no associated time and space units. It should be pretty easy to understand the importance of normalising to regular time and space units. If I tell you there have been 100 events, you don't know anything about what seismic hazard that represents. It could be 100 events in a very small volume over a very small time period; this would be a high hazard. It could be 100 events in a very large volume over a very long time period; this would be a low hazard. So the important quantity for seismic hazard estimates is the event rate density, i.e. the number of events per unit time, per unit volume. Only then can you compare apples with apples.
One final point. A constant event rate density, and a constant b-value over time represents a constant hazard state. The problem is that the a/b value without normalisation is entirely dependent on how long you have recorded this constant hazard state. The total number of events (i.e. the a-value) continuously grows and so does the a/b value, even though the hazard state is not changing. This is why without normalisation, the a/b is not a measure of hazard.
If you normalise the event count based on the event rate density and a standard time and volume, the a/b value can be a measure of hazard. However, in terms of probabilistic seismic hazard, the probability that the largest event in the database will exceed the a/b value is ≈ 63%, assuming an open-ended Gutenberg-Richter distribution or a very high MUL (MUL >> a/b).
The a/b value is a property of the Gutenberg-Richter model, not of the dataset
There is no special significance to the magnitude where the Gutenberg-Richter model crosses N = 1
The a/b value is a function of the number of events
Without space and time information, the a/b value (and the a-value) are not indicative of hazard
When comparing different times and zones using a/b, you must normalise using the event rate density and a standard time and volume
The probability of the largest event exceeding a/b is ≈ 63%
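The ≈ 63% figure is a straightforward consequence of the model (assuming an open-ended GR distribution, or MUL >> a/b) and is easy to check numerically:

```python
import math

# Each event exceeds a/b with probability 1/N (that is what N = 1 at
# a/b means), so the chance the largest of N events exceeds a/b is:
#   P = 1 - (1 - 1/N)^N  ->  1 - 1/e  ~  63% for large N
p_10000 = 1.0 - (1.0 - 1.0 / 10_000) ** 10_000   # ~0.632
limit = 1.0 - math.exp(-1.0)                      # ~0.632
```

The convergence is fast; even for a few hundred events the probability is already within a fraction of a percent of 1 - 1/e.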
You can now measure distances in any 3D view in mXrap (version 5.6.0 or later) with the ruler tool in the Annotations tab. The “Annotations tool” training video on the General and FAQ page goes through all the features and the operation. Most operations will probably be for the distance between two points, such as the event-to-survey distance below. You can also extend this to a multi-point ruler for measuring more complicated paths.
The controls are very similar to selection boxes. Control instructions are in the Annotations tab.
Ctrl + Left-click to move/select point
Ctrl + Right-click to rotate/cancel
Space to insert a point
Del to delete point
Other features include:
Snap ruler to point or surface (crosshairs will turn from red to green when snapping)
Point rounding – For selecting points on a grid or post-rounding
User defined labels on vertices or segments
Table of point coordinates, segment lengths, horizontal and vertical runs, trend and plunge
There have been some interface changes made to the mXrap software in versions 5.6.6 or later. The right-hand-side controls have had a bit of a face-lift and now there are separate coloured tabs for exporting, selections, annotations and clipping.
The 3D Controls button has also been moved above the series window and the buttons in general have had some styling. We are planning some further changes to Clipping and looking at combining the dynamic clipping controls with the Clipping panel.
The eXport panel is where you can control the screen capturing and table exports. This tab can be “popped-out” to make it easier to export many tools at once. Please note the “eXport panel” training video has been updated on the General and FAQ page.
If you have updated your software to 5.4.0 or higher, you may have noticed some changes to the Survey Import tool. The Survey Setup training video has been updated on the General and FAQ page. The main changes have been to support additional properties from DXF files.
Text objects from DXF files can now be imported into mXrap and displayed in the 3D view. mXrap version 5.5.2 supports TEXT, ATEXT and MTEXT objects. Be aware that there are limits on the maximum number of texts that can be displayed and MTEXT (multi-line) objects will be converted to one line.
Below is an example of DXF text shown in the Monitoring application; in this case it can be handy for the control room operator to see where the exclusion areas are. You should start seeing more text series in 3D views, where you can adjust the size of the text and set it to always face the 3D camera during rotation. This may include event magnitude labels, sensor IDs etc.
For basic site surveys, you may want to separate your text files from your regular surfaces and lines content so you can turn them on/off independently. Otherwise, when you tick a file from the list, all surface, line and text content will be displayed together. You can control whether each survey displays its Surface, Line or Text content from the survey import tool.
The colours from DXF files can now be used in mXrap and this option will be enabled by default when you build the cache.
BE AWARE that the DXF colours may not suit the mXrap 3D view. Most mining software packages have a dark background and so DXF colours are often light. These colours may not work so well on the white background in mXrap. This is especially true for lines. White floor strings are quite common and will be invisible in mXrap. There are some auto-correct functions built into the DXF colours for this reason. You also still have the option to override the colour displayed as before.
mXrap supports the following survey formats to be used in 3D views:
DXF (AutoCAD .dxf)
DTM / STR (Surpac .dtm/.str)
INP (Map3D Geometry .inp files)
The DXF format is complicated and AutoCAD regularly updates its specification. Our importer will always be behind the latest updates and therefore incompatible with the very newest DXF formats. When exporting your survey files, you should have compatibility options for older formats. Look for ASCII DXF options R14 or 2000; these will work in mXrap, otherwise it needs a bit of trial and error initially. Binary DXF files are not supported. The other option is to use the ODA File Converter. It is free to download and can convert DWG and DXF files into other formats.
When you are using the Frequency-Magnitude chart, it can be easy to forget it is log scale, and this can distort a few things. Consider the chart below. Have you ever thought the Gutenberg-Richter distribution doesn't look right? That it isn't matching the large events very well?
The Gutenberg-Richter distribution is a statistical model of the data. Consider what the chart looks like in linear scale rather than log scale. The difference at the tail of the distribution (largest events) seems much less significant, right? The other interesting point is the relative proportion of events above and below Mmin. Only roughly 20% of the events in your database are above the magnitude of completeness.
Obviously in linear scale, you can’t see what’s happening at the tail very well, that’s why we use the log scale in the first place :)
If you were wondering what a bored Xman does in their free time, this might give you an indication…..
You should know by now that mXrap is very flexible and apps can be developed to perform a very wide range of functions. If you were doubting that, this might change your mind :)
Ever played the puzzle game known as 2048? Well now there is an mXrap version!
The mXrap version has buttons for each of the four moves (Up, Down, Left, Right) which control the game board in the 3D view.
If 2048 is not your cup of tea, how about a spot of Backgammon? This is two player and each player rolls the dice and selects where to go from the list of possible moves.
Both of these games were built using the Beta version of mXrap so we can’t send them to you just yet. Once the Beta version is stable we’ll upload the games to mXsync for you to play around with. It will be good motivation for you to learn mXsync :).
Some things you see in mXrap are properties of the software, while other things are properties of the root folder.
We often use Excel as an analogy. Excel has many built-in capabilities, with endless possibilities for creating specific calculations, but without a user constructing a spreadsheet, that power and value are not fully utilised. An Excel user can set up a spreadsheet which, given the required inputs, will provide you with results. This user can then provide you with that spreadsheet, which you can use to perform the same calculations with other inputs.
mXrap is like the software Excel that provides the basic tools and the applications are like spreadsheets that can be used to perform specific tasks. Anybody with enough understanding of the software can build their own app which can be shared with others.
For example, when you make a chart in Excel, the “add chart function” is a property of Excel. What’s in the chart, what’s on each axis, what colour are the lines etc are properties of the spreadsheet.
mXrap is the same, there is an “Add Chart” function. Every chart in mXrap uses the same tool, but the application configures what’s actually displayed in the chart.
mXrap software level changes are things that affect the “Add Chart” function itself. For example the current mXrap charts only plot data on four axes; top, bottom, left and right. If we were to add more possible axes, like a secondary left axis, this would require a change to the software. It isn’t related to the root folder. Another example is the image capturing tool. This is a feature of every chart, 3D view and table at the software level.
If you want an updated Hazard Assessment application, this is like getting an updated spreadsheet. The root folder is essentially a library of data and applications, like a folder full of different spreadsheets and their associated data.
To summarise, if it seems like it's a common feature across many areas in mXrap, it's probably a property of the mXrap software. If it seems to be something related to a specific app or chart etc, it's probably a setting in the root folder.
Updating the mXrap software is easy, just download the installer from the website.
Updating the root folder is what we use mXsync for, and it's actually more complicated to manage the root folder than the software; a bit like trying to manage a lot of interconnected spreadsheets. We normally rely on sites to request root updates. If you read about a feature on the blog, or watch a training video, that seems different to your current version, you probably need a root update. It's a fairly quick process; we just need a brief connection with Teamviewer, Webex or GoTo Meeting to perform the update. Contact us at firstname.lastname@example.org.
If you notice that you are not getting updated events in mXrap, there are a few possible explanations. In order to troubleshoot the problem, it is good to know exactly how your events are transferred from your seismic database, into mXrap.
The first thing to check is if the events are just being filtered out. In the events table, there is a “Show All Rows” option that will disable all filters and show you every event in the database. Sort by descending time and “Reload Data” and check the latest event time. Cross-check with your seismic processing software to confirm you are definitely missing events. Remember there is a short delay (~ 5 mins) from events being recorded to appearing in mXrap.
If showing all rows unveils your missing events, it’s a filtering issue. Look through the event filters to make sure everything is turned off. It might also be the quality filter. To see what quality settings are applied at your site, refer to the “Event Quality Settings” video here.
If you have confirmed the events are not updating, go to your root folder and open the #Events Import folder. Inside there will be an all_events and a recent_events evp file (the exact folder, name and extension vary slightly between sites). These are the event files that are read by mXrap. Check the time-stamp of the recent_events file; it normally updates within 5 minutes of the latest event recorded. Try opening the file using Notepad; it is sorted by time, so check what the latest event is.
If the EVP files in the root folder have the updated events, mXrap is not reading the files correctly. Contact email@example.com for assistance.
EVP files are generated by querying the seismic event database. The recent_events evp is normally updated every 5 minutes and contains the events from the start of the previous month, up to the present. The all_events evp is normally updated every 24 hours and contains all events up until the start of the current month. So there is always up to one month of overlapping events between the all_events and recent_events EVP’s. The EVP file only changes if there is something to change, i.e. the all_events EVP may be checked for updates every 24 hours, but the file may not have any changes for several days or weeks if none of the processing has changed.
If the evp files in the root folder have not been updated, the problem lies in how these files are copied into the root folder from your seismic database. This process varies by site, depending on whether you use IMS or ESG as your seismic service provider.
IMS generate the all_events and recent_events EVP files and normally store them on the seismic server share drive (often a Linux samba drive). Hitting “Reload Data” in mXrap will copy those EVP files (if they’ve changed) into the #Events Import folder in the root. You need to specify the location of the EVP files on the network. This setting is in the “Config Events Import” app, usually at the top of the app list (log in as Admin or Super User). In this app you should see the file paths to the all_events and recent_events files.
Make sure these file paths are correct and try navigating to the files in Windows Explorer. If you can't access the files through Windows Explorer, you will need to ask your IT department for help to get access to the network location. There might also be a password required. If there is no password required, but your local machine is not able to access the files (particularly if you have recently updated Windows 10 or are using a new PC), then you may be experiencing this issue relating to accessing unauthenticated shared folders from Windows 10.
If you are able to access the EVP files on the seismic server, again, check the timestamps and open the recent_events file to see what the latest event is. If these files are up-to-date, it is a problem with the copy-action from the seismic server to the root. Double check the file path in mXrap is correct and try to “Reload Data” again. Contact firstname.lastname@example.org for assistance if mXrap events are still not updated.
If the EVP files on the seismic server are not updated, it is a problem with the IMS query and you will need to contact IMS for assistance.
We have a purpose-built program for querying the ESG seismic database and dumping the EVP files into the root folder. The mXrap Export ESG program normally runs on the ESG computer. Check that this is still running; there should be an icon in the system tray. The ESG Exporter needs to know the location of the seismic database .mdb file and the location of the #Events Import folder in the root. Try manually running the query from the Exporter window, look for any error messages and report them to email@example.com. Common problems are not being able to write files in the root folder (user permissions), the ESG Exporter being out-of-date, or the seismic database .mdb having moved or changed format.
The flowchart below summarises the troubleshooting process when your events are not updating in mXrap (PDF version).
There are many reasons you might want to store a short snippet of text associated with an event. There are two ways to do this in mXrap; event tags and event comments.
Event tags can be used to group events into categories. Example tags might be “suspected blast”, “damage occurred”, “suspect location”, “outlier” or “likely crusher noise”. These tags can be used in event filters to quickly show or hide particular categories.
Event comments are a second option to assign user text to events. Each event comment can be unique and about anything. They have no effect on event filters.
You can find videos on “Event tags” and “Event comments” at the training video page below. Both event tags and comments are shown in the main events table in General Analysis.
The event tags system has been modified recently. If your mXrap looks different to the video, you might need a root update. This process is now quick and easy with mXsync. We just need 5-10 minutes to connect via teamviewer / webex / gotomeeting.
In early versions of MS-RAP the “Omori” chart included the cumulative energy as a function of time after blasting. You won’t find that line anymore in the default Omori Analysis Tools application.
Although the total energy released is a meaningful quantity, the shape of the cumulative energy curve inherently has no meaning. The accumulation of a logarithmic parameter is dominated by the largest events and results in a curve with a somewhat arbitrary, random shape. The total energy released is included in the blast table in the Omori Analysis Tools app, but the cumulative energy line has no diagnostic value (in fact, it could be misleading) and does not represent the underlying stochastic process.
To illustrate further, the video below shows a repeated generation of synthetic seismic data where each sample has the same number of events, the same b-value, and the same Omori relationship.
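The idea can be sketched in a few lines of Python. This is a minimal illustration, not the routine used for the video: magnitudes are drawn from a Gutenberg-Richter (exponential) distribution, times from a modified Omori decay (p = 1), and log-energy is taken to scale as 1.5 × magnitude (the standard Gutenberg-Richter energy relation, in relative units). All constants are illustrative assumptions.

```python
import math
import random

def synthetic_catalogue(n, b=1.0, m_min=0.0, c=0.1, seed=None):
    """Return (times, magnitudes) for n synthetic events.

    Magnitudes follow a Gutenberg-Richter (exponential) distribution with
    slope b above m_min. Event times follow a modified Omori decay
    (rate ~ 1/(t + c), p = 1) over a 0-100 hour window, sampled by
    inverse transform.
    """
    rng = random.Random(seed)
    mags = [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]
    T = 100.0
    times = sorted(c * ((1.0 + T / c) ** rng.random() - 1.0) for _ in range(n))
    return times, mags

def cumulative_energy(mags):
    """Cumulative radiated energy, taking log10(E) ~ 1.5*M (relative
    units -- any constant offset only rescales the curve)."""
    total, out = 0.0, []
    for m in mags:
        total += 10.0 ** (1.5 * m)
        out.append(total)
    return out

times, mags = synthetic_catalogue(500, b=1.0, seed=1)
energy = cumulative_energy(mags)
# Share of the total contributed by the single largest event. Because the
# energy distribution is heavy-tailed, this share is typically large, so
# the curve's shape is set by a few random large events.
largest_share = 10.0 ** (1.5 * max(mags)) / energy[-1]
```

Re-running with different seeds gives catalogues with identical statistics but visibly different cumulative energy curves, which is exactly the point of the video.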
You can capture or save any 3D view, chart or table using the “Clip” or “File” options at the top-right of the mXrap window. There are additional image capturing controls available to increase the quality and to adjust what is captured in the image. For capturing charts for example, you can turn on/off the header/footer, axis titles, axis ticks or grid lines. In table views, you can capture as an image or csv file, all rows or selected, with or without column headers. High quality images can be really handy for reports, papers or website publishing.
People often notice that the local magnitude of an event in mXrap is different from the magnitude shown in their other seismic software. This is usually because the local magnitude equations do not match.
Local magnitude is a calculated (derived) parameter. Normally it’s based on seismic energy, moment/potency, or a combination of both. You should be able to find the relevant details for your site’s local magnitude in your seismic waveform processing software (e.g. WaveVis, Trace); otherwise contact your seismic service provider.
In mXrap, local magnitude can be imported along with the other event details from your seismic service provider, although imported magnitudes often have low precision, which can affect some of the charts and calculations. The “stepping” sometimes seen in the frequency-magnitude chart is caused by an imported local magnitude with only 1 dp of precision.
The limited precision is why we usually recalculate the local magnitude from the source parameters according to the following equation.
ML = CE x log10(Energy) + CM x log10(Moment) + C
CE, CM and C are the input parameters required. The most common local magnitude scales are below.
CE = 0, CM = 2/3, C = -6 (Moment magnitude, Hanks & Kanamori)
CE = 0.272, CM = 0.392, C = -4.63 (IMS scale)
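As a quick sanity check of the equation and the two scales above, here is a minimal Python sketch. The example energy and moment values are arbitrary, and the unit conventions (energy in joules, moment in N·m) are assumptions; check what your own processing software uses.

```python
import math

def local_magnitude(energy, moment, ce, cm, c):
    """ML = CE*log10(Energy) + CM*log10(Moment) + C.

    A zero coefficient means that term (and its input) is unused.
    """
    ml = c
    if ce != 0.0:
        ml += ce * math.log10(energy)
    if cm != 0.0:
        ml += cm * math.log10(moment)
    return ml

# Moment magnitude (Hanks & Kanamori): CE = 0, CM = 2/3, C = -6
mw = local_magnitude(None, 1.0e12, ce=0.0, cm=2.0 / 3.0, c=-6.0)

# IMS scale: CE = 0.272, CM = 0.392, C = -4.63
ml_ims = local_magnitude(1.0e6, 1.0e12, ce=0.272, cm=0.392, c=-4.63)

# Rounding to 1 dp, as in some imported catalogues, is what produces the
# "stepping" in the frequency-magnitude chart mentioned earlier.
ml_rounded = round(ml_ims, 1)
```

Recalculating from the full-precision source parameters, rather than importing the rounded value, avoids the stepping artefact.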
ESG have a few other local magnitude options depending on whether they use uniaxial and/or triaxial sensors. These settings are in the ESG Events Import app. The other settings for local magnitude are in the General Setup app under the Magnitude tab. The “Local magnitude settings” video on this page runs through how to change these settings in mXrap.
You can use selections to filter events in General Analysis. This gives you a lot more freedom than being restricted to the traditional min/max range filters. Follow the steps below to see how you can use this feature to plot the Frequency-Magnitude chart for events occurring during periods of high apparent stress.
You can also check out this page to watch the “Selection boxes” and “How to use selections in the base filter” videos.
Step 1 – Create a new selection on the Apparent Stress Time History chart. Note that selections can be made in any 3D view, chart or table in a similar way.
Step 2 – To apply the selected events to the filter, go to the Events Ranges panel, hit “Copy selections to Base filter” and switch on the selection filter below. Now only the selected events will be used in the Frequency-Magnitude chart.
Step 3 – To turn off the blue selection icons, go to the Events series and turn off the “Highlight selected” option.
Note that if you adjust the selection box or make another selection, you need to hit the “Copy selections” button again to apply the changes. Use the switch to turn off the selection filter and return to the original filter.
In the case of the Apparent Stress Time History chart, selection boxes are applied to the frequency line rather than the events in the background. The series that is active for selection can be modified. Look for the “Select” option in the series controls on the right-hand panel.
By now most of you have probably heard of mXsync. It has been installed at all the current sponsor sites, and while we explained it briefly as we went, there may still be some uncertainty about what it does.
mXsync is a piece of software, installed separately from mXrap, that facilitates backing up, restoring and upgrading the site root folder. We hope this new root folder management system will allow new and improved apps to flow more easily and quickly to sites.
Right now we would like at least one person at each site to get to the point where they have mXsync setup on their computer and they are familiar with how to perform a backup. We’ve uploaded a video that gives a quick intro into mXsync and how to do a default backup. There are a few other videos on mXsync that show you some more operations that sites may need to do at some point.
There are a number of quality filters applied to the event database before events are displayed in mXrap. This sometimes causes confusion when a particular event is visible in your other software but not in mXrap.
The most common cause is the location filter but there may be other reasons. Have a look at the “Event quality settings” video at the page below. It goes through all the quality filters applied at your site and how to change them. https://mxrap.com/training-videos/event-quality/
We previously created a Basic Seismic Monitoring app but it didn’t get widely used. We’ve taken another swing at it, with a new version released in late 2017.
The new app is intended for mine control room operators to monitor the latest seismicity and communicate event alerts and exclusion areas depending on site-specific rules. Each site can set up their own event alert and exclusion settings in the Basic Seismic Monitoring (Admin) application; the main application is then a simplified interface for the viewer.
Key features include:
Automatic event updating (no need to keep pressing “Reload Data”!).
Popup event alert notifications (to alert user when window is hidden/minimised).
Popup system alert notifications (triggered from threshold time without new events).
Plot exclusion areas and isolate single mine areas (e.g. single level plans).
Automatic View – quick zoom/rotate to the exclusion areas on screen.
Distance measurement – get the distance from any survey point to the nearest event alert.
There are two videos uploaded at the page below; one to show you how to do the initial setup in the Admin window and another for the basic user in the main monitoring window. If your root looks different to the video, you might need a root update. The process is quick! Just contact support. https://mxrap.com/training-videos/monitoring-app/
We are also working on a few more features for the app and will hopefully release another update soon in 2018. Additions should include:
The distance to survey filter has been around for a while, but now we have added a couple of new charts to further investigate the relationship between seismicity and your input surveys. The charts have been added to the General Analysis application under the Charts menu; look for “Distance to surveys”.
The example below shows how you can compare the seismicity around your geological structures. Of course, sometimes a structure will have a lot of seismicity nearby simply because you happen to do a lot of blasting nearby. You can also plot blasts as a function of distance to survey!
If you can’t see the charts in your mXrap, you might need a quick root update; ping us an email at firstname.lastname@example.org and we’ll sort it out. Now that we’ve set everyone up with mXsync, updates are very easy and will probably only take 10 minutes on teamviewer/webex :)
What do you think of the new charts? Any ideas for additions or improvements? Contact support and let us know!
If you want to learn more about mXrap, we don’t have a user manual to read. Who would want to read through one of those! We have been writing “cheat sheets” to show new users the main features of mXrap.
We’re still going to try and keep the cheat sheets updated, but we are going to focus more on our new training videos platform!
This is where we will post short tutorial videos to show and tell all the features of mXrap. Things like modifying clipping volumes, calculating hazard and using the gridding app will all be covered. In time we will add more FAQ content and even some more advanced training material for those who want to play around with the guts of mXrap.