#Variogram
Text
Enhancing ore grade estimation with artificial intelligence
Introduction
Accurate prediction of mineral grades is a fundamental step in mineral exploration and resource estimation, and it plays a significant role in the economic evaluation of mining projects. Currently available methods are based either on geometrical approaches or on geostatistical techniques that often treat the grade as a regionalised variable (Kaplan & Topal, 2020). Geostatistics has been widely used for quantitative estimation of ore deposits for many decades. However, ore quality does not vary uniformly in three dimensions, which results in poor-quality estimates from conventional geostatistical methods (Jain et al., 2022). These limitations and complexities inspired researchers to investigate alternative approaches that can overcome such obstacles. Over the past few decades, several researchers have focused on computational learning techniques that can predict grades more accurately without relying on an underlying assumption (Jang & Topal, 2014). The main aim of this article is to show the importance of using artificial intelligence to estimate the grades of a given deposit.
Key AI techniques in ore grade estimation
Artificial Neural Network (ANN)
Artificial neural networks (ANNs) are computer models designed to emulate human information-processing capabilities such as knowledge processing, speech, prediction, and control. The ability of ANN systems to handle a large number of variables with complex relationships, learn spontaneously from examples, reason over inexact and fuzzy data, and provide adequate, quick responses to new information has earned this technology increasing acceptance in different engineering fields (Al-Alawi & Tawo, 1998). By processing very large volumes of data quickly, neural-network-based systems can limit estimation errors and obtain results with a higher level of confidence than conventional methods such as ordinary kriging.
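As an illustration of this idea only (not the model from any study cited here), the sketch below trains a small feed-forward network on synthetic drillhole data with scikit-learn; the coordinates, grade values, and network size are all invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical drillhole samples: easting, northing, depth (metres)
X = rng.uniform(0, 1000, size=(300, 3))
# Synthetic grade: a smooth spatial trend plus assay noise
y = 2.0 + 0.003 * X[:, 0] - 0.001 * X[:, 2] + rng.normal(0, 0.1, 300)

# Scale inputs, then fit a small multilayer perceptron regressor
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=2000, random_state=0))
model.fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])  # R^2 on held-out samples
print(round(r2, 2))
```

In practice the inputs would be real assay and geological variables, and the network architecture and training split would be tuned by cross-validation.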
Support Vector Regression (SVR)
SVR is the extension of the support vector machine (SVM) classification model to regression problems. To generalize the SVM to SVR, an insensitive zone known as the ε-tube is added around the function (Cortes & Vapnik, 1995). A non-linear kernel function maps input features from the original space into a higher-dimensional one, so the problem becomes the construction of an optimal linear surface that fits the data in that feature space (Awad & Khanna, 2015). SVR has been applied to model the relationship between geological variables and ore grades, providing robust predictions even with limited data samples. Research comparing various machine learning models, including SVR, has found them effective for ore grade estimation in different mineral deposits.
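A minimal SVR sketch on synthetic data, again assuming scikit-learn; `epsilon` is the half-width of the ε-tube and the RBF kernel supplies the non-linear mapping to a higher-dimensional feature space described above:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical sample coordinates and a smooth synthetic grade signal
X = rng.uniform(0, 500, size=(200, 2))
y = np.sin(X[:, 0] / 80.0) + 0.002 * X[:, 1] + rng.normal(0, 0.05, 200)

# Epsilon-insensitive loss: residuals inside the tube incur no penalty
svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", C=10.0, epsilon=0.05))
svr.fit(X[:150], y[:150])
r2 = svr.score(X[150:], y[150:])  # R^2 on held-out samples
print(round(r2, 2))
```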
The artificial intelligence techniques described above are not the only ones used in the mining industry to estimate grades; several others are described in the literature.
Application and benefits
The use of artificial-intelligence-based approaches for grade estimation offers significant advantages.
Improved prediction accuracy
AI models can capture complex relationships between geological variables and ore grades, leading to more precise estimations. Ismael et al. (2024) report a study carried out at the El-Gezera region in El-Baharya Oasis, Western Desert of Egypt, on forecasting iron ore grade. A novel artificial neural network (ANN) model, geostatistical methods (variograms and ordinary kriging), and a triangulated irregular network (TIN) were employed; the presented ANN model estimates the iron ore grade as a function of the Cl%, SiO2%, and MnO% grades with a correlation factor of 0.94. This result demonstrates that the ANN is an excellent tool for grade estimation.
Cost reduction
By providing accurate grade estimations, AI helps in optimizing drilling and exploration activities, thereby reducing operational costs. The integration of AI in predicting ore grades significantly enhances the precision of assessments in mineral exploration.
Handling complex deposits
Classical (geometrical) methods of estimation are used less in ore grade estimation than geostatistics (kriging), which has proved to provide more accurate estimates through its ability to account for the geology of the deposit and to assess estimation error. AI techniques have been employed in diverse ore deposit types and have proven to provide comparable or better results than traditional methods, especially in complex, structurally controlled vein deposits (Abuntori et al., 2021).
Conclusion
The application of artificial intelligence in predicting ore grades represents a significant advancement in the mining and minerals sector. By leveraging advanced algorithms and data analytics, AI technologies enhance the accuracy and efficiency of ore grade estimation, enabling more informed decision-making and optimized resource extraction.
For more information, visit Mining Doc
Original Source: https://bit.ly/4biS3ze
0 notes
Text
A Data Scientist Friendly Variogram Tutorial for Quantifying Spatial Continuity
#Technology #DataAnalytics #DataDriven https://towardsdatascience.com/a-data-scientist-friendly-variogram-tutorial-for-quantifying-spatial-continuity-1d2f29dcfb51?utm_source=dlvr.it&utm_medium=tumblr
0 notes
Link
Applied on a Synthetic Mining Dataset using open source GSLib and Python. Continue reading on Towards Data Science » #AI #ML #Automation
0 notes
Text
A Review on the Assessment of the Spatial Dependence
Authored by Pilar Garcia Soidan*
Abstract
For intrinsic random processes, an appropriate estimation of the variogram is required to derive accurate predictions when proceeding through the kriging methodology. The resulting function must satisfy the conditionally negative definiteness condition, both to guarantee a solution to the kriging equation system and to derive a non-negative prediction error. Assessment of the resulting function is typically addressed through graphical tools, which are not necessarily conclusive, thus making it advisable to perform tests to check the adequacy of the fitted variogram.
Keywords: Intrinsic stationarity; Isotropy; Variogram
Introduction
When spatial data are collected, construction of a prediction map for the variable of interest over the whole observation region is typically addressed through kriging techniques [1]. This methodology has been applied in a variety of areas (hydrology, forestry, air quality, etc.), and its practical implementation requires a prior estimation of the data correlation. The latter issue can be accomplished by approximating the variogram function [2], under the assumption that the underlying process is intrinsic, which is the least restrictive stationarity requirement.
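To make the connection concrete, here is a minimal ordinary-kriging sketch in plain NumPy; the exponential variogram model and the sample values are assumed purely for illustration. The bordered (Lagrange multiplier) row forces the weights to sum to one:

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, a=100.0):
    """Exponential variogram model (parameters assumed for illustration)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / a))

# Invented sample locations and observed values
pts = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0], [80.0, 80.0]])
z = np.array([1.2, 0.8, 1.5, 0.9])
x0 = np.array([40.0, 40.0])  # prediction location

n = len(pts)
# Ordinary kriging system: variogram values between samples, bordered by
# a row/column of ones (Lagrange multiplier) so the weights sum to one.
A = np.ones((n + 1, n + 1))
A[:n, :n] = exp_variogram(np.linalg.norm(pts[:, None] - pts[None], axis=2))
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = exp_variogram(np.linalg.norm(pts - x0, axis=1))

sol = np.linalg.solve(A, b)
weights = sol[:n]
pred = weights @ z
print(round(weights.sum(), 6), round(pred, 3))
```

The conditionally negative definite variogram is what guarantees that this bordered system has a solution and that the resulting prediction error is non-negative.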
However, estimation of the variogram is far from simple. The resulting function must be valid for prediction, namely, it must fulfill the conditionally negative definiteness condition, and in practice this problem is usually solved through a three-step procedure [3]. To start, a nonparametric method, such as the empirical variogram or a kernel-type approach, can be employed among other options, although the functions derived in this way are not necessarily valid [4,5]. Then, in a second step, a valid parametric model is selected, and its unknown parameters are estimated to best fit the data by any of the criteria (maximum likelihood, least squares, etc.) provided in the statistics literature. Finally, the adequacy of the fitted variogram function should be checked, by using a cross-validation mechanism or goodness-of-fit tests. The former procedures are not always conclusive, and their use is recommended for comparing several valid models rather than for assessing a unique fit. Alternatively, we could perform a test to determine the appropriateness of a variogram model, such as the one introduced in Maglione & Diblasi [6], for application to Gaussian and isotropic random processes, or the more general one suggested in Garcia-Soidan & Cotos-Yanez [7], which accounts for both the isotropic and the anisotropic scenarios.
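The first two steps of this three-step procedure can be sketched as follows, using NumPy and SciPy on a synthetic one-dimensional transect; the binning choices and the spherical model are illustrative assumptions, not part of the cited works:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
# Synthetic transect: a spatially correlated signal plus measurement noise
x = np.sort(rng.uniform(0, 100, 400))
z = np.sin(x / 10.0) + rng.normal(0, 0.1, x.size)

# Step 1: empirical (Matheron) variogram -- bin half squared increments by lag
h = np.abs(x[:, None] - x[None, :])
sq = 0.5 * (z[:, None] - z[None, :]) ** 2
edges = np.arange(0.0, 30.0, 2.0)
lags, gam = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (h > lo) & (h <= hi)
    if m.any():
        lags.append(h[m].mean())
        gam.append(sq[m].mean())
lags, gam = np.array(lags), np.array(gam)

# Step 2: least-squares fit of a valid parametric model (spherical)
def spherical(h, nugget, sill, a):
    r = np.minimum(h / a, 1.0)
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

p, _ = curve_fit(spherical, lags, gam, p0=[0.01, gam.max(), 15.0],
                 bounds=([0.0, 0.0, 1e-6], np.inf))
nugget, sill, a = p
print(round(nugget, 3), round(sill, 3), round(a, 1))
```

Step 3, assessing the fit, would then follow via cross-validation or one of the goodness-of-fit tests mentioned above.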
An important shortcoming of this three-stage scheme is the choice of the parametric model. The most common options are based on the use of flexible functions, such as the Matérn family, or on the selection of a model "by eye", by comparing the form of the nonparametric variogram with that of the different valid families typically used in practice. However, this problem becomes more difficult when dealing with anisotropic variograms. Indeed, isotropy conveys that the data correlation depends only on the distance between the spatial sites and not on the direction of the lag vector, unlike the anisotropic setting. This means that assessment of isotropy could be a previous step, whose acceptance would simplify the selection of the model and the subsequent variogram computation. In practice, the isotropic property is typically checked through graphical methods, by plotting a nonparametric estimator in several directions, although such procedures are not always conclusive. Formal approaches to test for isotropy have been introduced in Guan et al. [8] and in Maity & Sherman [9]. The first test was designed for application to strictly stationary random processes, whereas the latter works in more general settings.
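A simple numerical complement to those directional plots is to compare empirical variogram values along different directions; the sketch below (plain NumPy, on a synthetic field made deliberately anisotropic) keeps only pairs of sites whose separation vector lies within an angular tolerance of each direction:

```python
import numpy as np

# Synthetic field on a 30 x 30 grid, varying faster north-south than east-west
xs, ys = np.meshgrid(np.arange(30.0), np.arange(30.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
z = np.sin(pts[:, 0] / 15.0) + np.sin(pts[:, 1] / 5.0)

def directional_variogram(pts, z, angle_deg, lag, tol_deg=15.0, lag_tol=0.5):
    """Empirical variogram at one lag, using only pairs aligned with angle_deg."""
    d = pts[:, None, :] - pts[None, :, :]
    h = np.hypot(d[..., 0], d[..., 1])
    ang = np.degrees(np.arctan2(d[..., 1], d[..., 0])) % 180.0
    dev = np.abs(ang - angle_deg)
    aligned = np.minimum(dev, 180.0 - dev) <= tol_deg
    m = (np.abs(h - lag) <= lag_tol) & aligned
    return 0.5 * np.mean((z[:, None] - z[None, :])[m] ** 2)

g_ew = directional_variogram(pts, z, angle_deg=0.0, lag=6.0)   # east-west
g_ns = directional_variogram(pts, z, angle_deg=90.0, lag=6.0)  # north-south
print(round(g_ew, 3), round(g_ns, 3))
```

Under isotropy the two values would be close; a large gap, as in this constructed example, points toward an anisotropic variogram model or toward one of the formal tests cited above.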
Conclusion
The need to obtain an adequate variogram estimator demands a deep exploration of the available data. Firstly, the isotropic condition should be checked, as this condition would simplify the characterization of the dependence structure. Graphical diagnosis of this assumption should be accompanied by a formal test to determine its acceptance. Then, a nonparametric estimator can be computed and used to derive a valid parametric fit, whose appropriateness can likewise be evaluated through any of the proposed goodness-of-fit tests.
To Know More About Biostatistics and Biometrics Open Access Journal Please Click on: https://juniperpublishers.com/bboaj/index.php
To Know More About Open Access Journals Publishers Please Click on: Juniper Publishers
#Juniper Publishers#open access journals#biostatistics#biometrics#kriging methodology#Variogram#spatial data
0 notes
Video
instagram
View-rose analysis for a dedicated route in #London, UK, crossing the #thamesriver. Rather than viewing only one angle at a time, users can view semi-variogram values in all directions at once from a particular point. The video highlights the visual obstructions and the visibility areas while commuting. VIEW ANGLE: 360 degrees. PATH IDENTIFICATION: via space syntax. DISTANCE: 4.7 km. Project by: @piyushprajapati (at Thames Path) https://www.instagram.com/p/CERqi1up4Lh/?igshid=1vf207ktnbxlq
1 note
·
View note
Text
4.17
I’m having quite a few issues tonight. I’ve spent some time over the last couple of days trying to work on this week’s list of tasks, and I’ve been hitting issues with most of them. I don’t have the water data, but I’ve been working through Zixian’s code and trying to apply it to the lead level data. I’m a little confused as to what exactly NEZipcode is, and about all of the things that are being done to it. That’s also where I start to have coding problems: when I try to set NEZipcode.df, I get “trying to get slot "data" from an object (class "sf") that is not an S4 object”, and I’m not sure what that means. I’m also having issues loading the pycno library. For these reasons I’m having trouble getting the code to work on my computer and extending it to BLL data, and I’m hoping to ask some questions about the code tomorrow to clear up other confusions, as I’m having trouble following pieces of it. (I’m sure it’s great code, I’m just not super great at this!!!!) I’m also having issues with the variogram stuff; it’s telling me that “ALAND10” is not found, but it’s definitely a value in the zipcode chart, so I’m not sure why that would be the case.
0 notes
Quote
...it’s not the listener’s fault if they miss out on something that will change their lives – these days, anyone can gain access to a library of over 15 million songs on demand for free.
Brian Whitman’s How music recommendation works — and doesn’t work piece on Variogram, his personal blog
0 notes
Text
A Data Scientist Friendly Variogram Tutorial for Quantifying Spatial Continuity
#AI #ML #Tech https://towardsdatascience.com/a-data-scientist-friendly-variogram-tutorial-for-quantifying-spatial-continuity-1d2f29dcfb51?utm_source=dlvr.it&utm_medium=tumblr
0 notes
Link
Abstract: Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse remote sensing reflectance and phytoplankton absorbance spectra to describe how data points are correlated with distance across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid... from New NASA STI https://go.nasa.gov/2vWh9lT
0 notes
Text
p-Adaptive Refinement Based on Stress Recovery Technique Considering Ordinary Kriging Interpolation in L-Shaped Domain
The primary objectives of this study are threefold. Firstly, the original SPR (superconvergent patch recovery) method of stress recovery is modified by incorporating the kriging interpolation technique to fit a polynomial to the derivatives recovered at the Gauss points. For this purpose, the p-version of finite element analysis is performed to produce the stresses at the fixed Gauss points, where the integrals of Legendre polynomials are used as basis functions. In contrast to the conventional least-squares method for stress recovery, the weight factor is determined by experimental and theoretical variograms for interpolation of the stress data, unlike conventional interpolation methods that use an equal weight factor. Secondly, an adaptive procedure for hierarchical p-refinement, in conjunction with an a posteriori error estimate based on the modified SPR method, is proposed. Thirdly, a new error estimator based on the limit-value approach is proposed, predicting the exact strain energy to verify the kriging-based SPR method. The validity of the proposed approach has been tested by analyzing two-dimensional plates with a rectangular cutout in the presence of stress singularity.
0 notes
Text
GEOSTATISTICS: APPLIED MINING GEOSTATISTICS
GEOSTATISTICS TRAINING DESCRIPTION
Geostatistics, developed by Prof. George Matheron in 1960, has been widely applied in the mining industry. Geostatistics is used to evaluate mineral resources and carry out modelling. Geostatistical methods take into account the spatial arrangement, represented by the variogram model, in estimation methods such as polygonal, grouping…
View On WordPress
0 notes
Photo
This is everything that occupies my mind :(
1 note
·
View note
Text
A Data Scientist Friendly Variogram Tutorial for Quantifying Spatial Continuity
https://towardsdatascience.com/a-data-scientist-friendly-variogram-tutorial-for-quantifying-spatial-continuity-1d2f29dcfb51?utm_source=dlvr.it&utm_medium=tumblr
0 notes
Text
If you did not already know
Progressively Growing Generative Autoencoder (PIONEER, Pioneer Network)
We introduce a novel generative autoencoder network model that learns to encode and reconstruct images with high quality and resolution, and supports smooth random sampling from the latent space of the encoder. Generative adversarial networks (GANs) are known for their ability to simulate random high-quality images, but they cannot reconstruct existing images. Previous works have attempted to extend GANs to support such inference but, so far, have not delivered satisfactory high-quality results. Instead, we propose the Progressively Growing Generative Autoencoder (PIONEER) network, which achieves high-quality reconstruction with 128×128 images without requiring a GAN discriminator. We merge recent techniques for progressively building up the parts of the network with the recently introduced adversarial encoder-generator network. The ability to reconstruct input images is crucial in many real-world applications, and allows for precise intelligent manipulation of existing images. We show promising results in image synthesis and inference, with state-of-the-art results in CelebA inference tasks. …

Computational Productive Laziness (CPL)
In artificial intelligence (AI) mediated workforce management systems (e.g., crowdsourcing), long-term success depends on workers accomplishing tasks productively and resting well. This dual objective can be summarized by the concept of productive laziness. Existing scheduling approaches mostly focus on efficiency but overlook worker wellbeing through proper rest. In order to enable workforce management systems to follow the IEEE Ethically Aligned Design guidelines to prioritize worker wellbeing, we propose a distributed Computational Productive Laziness (CPL) approach in this paper. It intelligently recommends personalized work-rest schedules based on local data concerning a worker’s capabilities and situational factors to incorporate opportunistic resting and achieve superlinear collective productivity without the need for explicit coordination messages. Extensive experiments based on a real-world dataset of over 5,000 workers demonstrate that CPL enables workers to spend 70% of the effort to complete 90% of the tasks on average, providing more ethically aligned scheduling than existing approaches. …

Graph Variogram
Irregularly sampling a spatially stationary random field does not yield a graph stationary signal in general. Based on this observation, we build a definition of graph stationarity based on intrinsic stationarity, a less restrictive definition of classical stationarity. We introduce the concept of graph variogram, a novel tool for measuring spatial intrinsic stationarity at local and global scales for irregularly sampled signals by selecting subgraphs of local neighborhoods. Graph variograms are extensions of variograms used for signals defined on continuous Euclidean space. Our experiments with intrinsically stationary signals sampled on a graph demonstrate that graph variograms yield estimates with small bias of true theoretical models, while being robust to sampling variation of the space. …

OSEMN Process (OSEMN)
We’ve variously heard it said that data science requires some command-line fu for data procurement and preprocessing, or that one needs to know some machine learning or stats, or that one should know how to ‘look at data’. All of these are partially true, so we thought it would be useful to propose one possible taxonomy – we call it the Snice* taxonomy – of what a data scientist does, in roughly chronological order: Obtain · Scrub · Explore · Model · iNterpret (or, if you like, OSEMN, which rhymes with possum). Using the OSEMN Process to Work Through a Data Problem …
http://bit.ly/2Zl4t2T
0 notes