#fielddata
noticiasorganicas · 2 months ago
Text
AI-powered drones, sustainable bio-inputs, and apps like FieldData are transforming the productive landscape. Farmers manage livestock and crops with precision, optimizing resources and improving environmental quality.
thescoutfdc · 4 years ago
Photo
Oil and gas companies are increasingly confronted with the need to address Environmental, Social, and Governance (“ESG”) imperatives in their businesses.
Here is the best way to get started on an oil and gas ESG strategy.
Connect with us at: http://www.scoutfdc.com/contact-us/
qnopy · 8 years ago
Text
Data-Collection App Targets Water and Environmental Sector -by Jeff Rubenstone, ENR
Read the full article originally published on ENR
Engineers and field technicians still rely on pen and paper to capture readings in the field. But an app designed by a former environmental engineer takes a different tack.
“It’s a very data-driven area of construction,” says Saurabh Gogate, founder and CEO of tech start-up QNOPY. “Whether it’s new construction, remediation or water-quality monitoring, the work is measured by the quality of the data and how fast you can analyze it.”
The QNOPY mobile app has custom-designed forms suited to the workflow of the engineer.
“The forms are fairly straightforward—it makes the data collection more efficient,” says Derek M. Wurst, condition assessment manager with Black & Veatch’s water division…
webcreek-blog · 8 years ago
Photo
Our star oilfield product OFS PRO is a must-have for every O&G business -- check out these system features! Get your FREE trial here: http://ofspro.com/get-started-now/
globalmediacampaign · 5 years ago
Text
Configuring and authoring Kibana dashboards
Kibana is an open-source data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence. It offers powerful, easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Kibana is tightly integrated with Amazon Elasticsearch Service (Amazon ES), a search and analytics engine, to simplify the analysis of large volumes of data. With its simple, browser-based interface, Amazon ES enables you to create and share dynamic dashboards quickly.

This post demonstrates how to use Kibana to create visualizations and a dashboard from Amazon Relational Database Service (RDS) and Amazon Aurora PostgreSQL logs stored in Amazon ES. This post is part two of a two-part series. For part one, refer to Analyze PostgreSQL logs with Amazon Elasticsearch Service.

The following are the high-level steps:

1. Create index patterns
2. Aggregate on a text field
3. Explore the discovery feature
4. Explore the visualization feature
5. Create visualizations
6. Create dashboards

Prerequisites

- A valid AWS account with access to the appropriate AWS services.
- An Aurora/RDS PostgreSQL database. For more information, see Amazon RDS.
- Confirm that the database logs are generated with the required content. Check the parameter group and make sure that the following parameters have the correct values:
  - log_lock_waits=1 (true)
  - log_min_duration_statement=5000 (log queries that take more than 5 seconds)
  - log_min_messages=warning
  - log_min_error_statement=error
  - log_statement=ddl
  - log_checkpoints=1 (true)
  - log_connections=1 (true)
  - log_disconnections=1 (true)

Creating index patterns

To visualize and explore data in Kibana, you must first create index patterns. An index pattern points Kibana to the Amazon ES indexes containing the data that you want to explore. For example, to explore the data from a particular database in August 2019, create the index pattern cwl--2019.08*.
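As an aside, the monthly wildcard pattern can be derived from any date. A minimal sketch (the helper name `monthly_index_pattern` is illustrative, not from the post; the `cwl--` prefix follows the daily index format cwl--yyyy.MM.dd described here):

```python
from datetime import date

def monthly_index_pattern(day: date, prefix: str = "cwl--") -> str:
    """Return the wildcard pattern (e.g. cwl--2019.08*) that matches every
    daily index (cwl--yyyy.MM.dd) in the month containing `day`."""
    return f"{prefix}{day.strftime('%Y.%m')}*"

print(monthly_index_pattern(date(2019, 8, 15)))  # cwl--2019.08*
```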
While setting up Amazon CloudWatch to Amazon ES streaming, configure the indexes in the format cwl--yyyy.MM.dd. To create index patterns, complete the following steps:

1. Open the Kibana application using the URL from the Amazon ES Domain Overview page.
2. On the navigation panel, choose the gear icon to open the Management page.
3. Choose Index Patterns.
4. Choose Create index pattern.
5. For Index pattern, enter cwl with an asterisk wildcard (cwl-*) as your default index pattern.
6. For Time filter, choose @timestamp.
7. Choose Create index pattern.

This index pattern is set as the default automatically. If not, choose the star icon, as shown in the screenshot preview below. Repeat these steps to create index patterns for each database (cwl--*). This is a best practice to improve query performance.

Aggregating on a text field

To see when an error was logged in a particular period, or which queries were logged at a particular time, allow aggregation on the message field. Aggregations on a text field are not enabled by default, so you must modify that setting. For more information, see Getting Started with Amazon Elasticsearch: Filter Aggregations in Kibana. Note that this setting can consume more memory, depending on the size of the data.

To make the message field aggregatable, complete the following steps:

1. On the navigation pane, choose Dev Tools.
2. Enter the following command:

PUT cwl-*/_mapping/?include_type_name=false
{
  "properties": {
    "message": {
      "type": "text",
      "fielddata": true
    }
  }
}

Now, set the correct shard value. To allow one shard per index, use the following code:

PUT /_template/shards_per_index
{
  "template": "cwl-*",
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}

For more information, see Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?

Exploring the discovery feature

You can explore your data with Kibana's data discovery functions. You now have access to every event from all the databases. Use the index pattern cwl-*.
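If you script these Dev Tools calls rather than typing them, the two request bodies above can be kept as plain dictionaries and serialized on demand. A sketch (variable names are illustrative; the JSON mirrors the commands shown above):

```python
import json

# Body of: PUT cwl-*/_mapping/?include_type_name=false
# Enables fielddata on the `message` text field so Kibana can aggregate on it.
fielddata_mapping = {
    "properties": {
        "message": {"type": "text", "fielddata": True}
    }
}

# Body of: PUT /_template/shards_per_index
# One primary shard and one replica for every cwl-* index.
shards_template = {
    "template": "cwl-*",
    "settings": {"index": {"number_of_shards": 1, "number_of_replicas": 1}},
}

# Each body serializes to the exact JSON sent in the PUT request.
print(json.dumps(fielddata_mapping, sort_keys=True))
```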
It is marked as the default while creating the index patterns. You can view data by submitting search queries and filtering results using Lucene query syntax. You can also see the number of documents that match the search query and get field value statistics.

To perform ad hoc searches across logs from multiple databases, use the index pattern cwl-*. To get data from a single database, choose that database's index pattern instead of cwl-*; find it by searching for the database name in the dropdown menu in the top-left corner of the Discover pane. This boosts query performance by invoking only the indexes of that particular database.

The following image shows the populated data. The Selected Fields menu shows the fields you configured while setting up the CloudWatch-to-Amazon ES log streaming (date, time, misc, message). The number of hits is displayed in the top-left corner of the page.

To display the SELECT messages and omit COPY SQL commands, use the Lucene query syntax: AND message:"select" -message:"copy"

The search commands are as straightforward as performing an online search. For example, you could also use AND select -copy without naming the context field (message).

Exploring the visualization feature

To create visualizations based on the data in your Amazon ES indexes, use the visualization function. Kibana visualizations are based on Amazon ES queries: you can create charts that show trends, spikes, and dips by using a series of Amazon ES aggregations to extract and process data. Create visualizations from a search saved from the discovery function, or start with a new search query.

To create a new visualization, complete the following steps:

1. On the navigation pane, choose Visualization.
2. Choose the add icon, as shown in the screenshot preview below, represented by a green square with a + sign.

The following screenshot shows the various visualization types to choose from.
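The include/exclude filter above follows a simple pattern: quoted terms on a field, with a leading minus for negation. A small sketch of that pattern (the helper name is illustrative, not part of Kibana or Lucene):

```python
def lucene_include_exclude(field, include, exclude):
    """Build a Lucene filter such as: message:"select" -message:"copy"
    (include terms must match; exclude terms are negated with a leading -)."""
    clauses = [f'{field}:"{t}"' for t in include]
    clauses += [f'-{field}:"{t}"' for t in exclude]
    return " ".join(clauses)

print(lucene_include_exclude("message", ["select"], ["copy"]))
# message:"select" -message:"copy"
```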
Creating visualizations for the RDS PostgreSQL DB logs live dashboard

This post creates the following visualizations and adds them to a dashboard:

- A visual builder timeline graph of long-running queries.
- A data table of the locks data.
- A pie chart of the percentage of error, fatal, and warning messages.
- A data table of the error, fatal, and warning messages aggregated by time.
- A data table of the long-running queries aggregated by time.
- A visual builder bar graph of the checkpoints.
- A visual builder line graph of the connections and disconnections.
- A data table of the DDL statements aggregated by time.

This is not an exhaustive list of visualizations, but it helps you understand how to use each of them.

Timeline graph of long-running queries

To create a long-running-queries timeline graph, complete the following steps:

1. On the Visualizations pane, choose the add icon (represented by a + sign with a circle around it).
2. Choose Visual Builder.
3. For each filter (SELECT, INSERT, UPDATE, and DELETE), choose the add icon.
4. For Group by, choose Filter.
5. For Query string, enter the same string as the filter type (select, insert, update, or delete). The following screenshot demonstrates steps 3–5.
6. Choose the Panel options tab.
7. For Index pattern, enter the appropriate index pattern (cwl-*).
8. Choose No under Drop last bucket?
9. For Panel filter, enter the database name. The following screenshot details steps 6–9.
10. Choose Save to save the visualization with an appropriate title (for example, Long-Running Queries Graph).

The output of the visualization is displayed as shown in the image below.

Data table of locks data

To create a data table of locks data, complete the following steps:

1. On the Discover page, select the appropriate index pattern (cwl-*).
2. Add message as a selected field by hovering over message in the list of Available fields and choosing the Add button. The screenshot below previews the message field selected.
3. For Filters, enter the Lucene query lock OR exclusivelock OR accesssharelock, as shown in the screenshot below.
4. Choose the Refresh button.
5. Choose Save to save the search with an appropriate title (for example, Locks Info).

The output of the visualization is displayed as shown in the image below.

Pie chart of error, fatal, and warning messages

To create an error, fatal, and warning messages pie chart, complete the following steps:

1. On the Visualizations pane, choose the add icon.
2. Choose Pie chart.
3. From the Index pattern list, choose the appropriate index pattern (cwl-*).
4. Choose Split Slices under the Buckets category.
5. For Aggregation, choose Filters.
6. Add the following filters, choosing Add Filter for each:
   - "" AND "error"
   - "" AND "fatal"
   - "" AND "warning"
   The following screenshot shows the filters added under the Buckets section.
7. Choose Save to save the visualization with an appropriate title (for example, Error, Fatal, and Warning Chart).

The output of the visualization is displayed as shown in the image below.

Data table of the error, fatal, and warning messages

To create an error, fatal, and warning messages data table aggregated by time, complete the following steps:

1. On the Visualizations pane, choose the add icon.
2. Choose Data Table.
3. From the Index pattern list, choose the appropriate index pattern (cwl-*).
4. Choose Add metrics.
5. Under Metric, for Aggregation, choose Top Hit.
6. For Field, choose message. The following screenshot preview details steps 4–6.
7. Under Buckets, choose Split Rows.
8. For Aggregation, choose Date Histogram, as shown in the screenshot preview below.
9. Choose the play icon.
10. For Filters, add the query string in Lucene syntax: "" AND ("error" OR "fatal" OR "warning"), as shown in the screenshot preview below.
11. Choose the Refresh button.
12. Choose Save to save the visualization with an appropriate title (for example, Error, Fatal, and Warning Messages).

The output of the visualization is displayed as shown in the image below.
Data table of long-running queries

To create a long-running-queries data table aggregated by time, follow the same procedure as for the messages data table, with the following changes:

1. Replace the query string with AND (("duration" AND "insert") OR ("duration" AND "select") OR ("duration" AND "delete") OR ("duration" AND "update")).
2. Choose Save to save the visualization with an appropriate title (for example, Long-Running Queries).

The output of the visualization is displayed as shown in the image below.

Graph of checkpoints

To create a checkpoints graph, complete the following steps:

1. On the Visualizations pane, choose the add icon.
2. Choose Visual Builder.
3. For Group by, choose Filters.
4. In the Filters fields, enter "checkpoint complete" and "Checkpoints", as shown in the screenshot preview below.
5. Choose the Options tab.
6. Add the appropriate index pattern (cwl-*) in the Index pattern text box.
7. Choose No under Drop last bucket?
8. For Panel filter, enter the database name.
9. For Chart type, choose Bar.
10. Choose Save to save the visualization with an appropriate title (for example, Checkpoint Graph).

The output of the visualization is displayed as shown in the image below.

Graph of connections and disconnections

To create a graph of connections and disconnections, complete the following steps:

1. On the Visualizations pane, choose the add icon.
2. Choose Visual Builder.
3. For each filter (Connection and Disconnection), choose the add icon.
4. For Group by, choose Filter.
5. For Query string, enter either "connection authorized" or disconnection, depending on the filter title. The following screenshot preview details these steps.
6. Choose the Options tab.
7. Add the appropriate index pattern (cwl-*) in the Index pattern text box.
8. Choose No under Drop last bucket?
9. For Panel filter, enter the database name.
10. Choose Save to save the visualization with an appropriate name (for example, Connections and Disconnections Graph).
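The long-running-queries filter pairs "duration" with each DML verb. The string can be assembled programmatically; a sketch (the helper name `duration_query` is illustrative, not from the post):

```python
def duration_query(verbs):
    """Build the long-running-queries Lucene filter, e.g.
    (("duration" AND "insert") OR ("duration" AND "select") OR ...)."""
    clauses = [f'("duration" AND "{v}")' for v in verbs]
    return "(" + " OR ".join(clauses) + ")"

print(duration_query(["insert", "select", "delete", "update"]))
```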
The output of the visualization is displayed as shown in the image below.

Data table of DDL statements

To create a data table of DDL statements aggregated by time, follow the procedure for creating the messages data table, with the following changes:

1. Replace the query string with "" AND (alter OR drop OR create OR copy OR grant).
2. Choose Save to save the visualization with an appropriate name (for example, DDL Statements).

The output of the visualization is displayed as shown in the image below.

Dashboard

The dashboard displays a collection of visualizations and searches. You can arrange, resize, and edit dashboard content, and also save and share the dashboard. To configure the dashboard, complete the following steps:

1. From the navigation pane, choose Dashboard.
2. Choose the add icon.
3. Choose each visualization in the list to add it to the dashboard panel.
4. To add the saved visualizations from your discovery search, use the Saved Search tab, as shown in the screenshot below.

The visualizations and searches in a dashboard are stored in panels that you can move, resize, edit, and delete. To start editing, click Edit in the menu bar.

- To move a panel, click and hold the panel header and drag it to the new location.
- To resize a panel, click the resize control on the lower right and drag it to the new dimensions.

5. Adjust the panels as required.
6. Choose Save to save the dashboard with the database name as the title.
7. Choose Store time with dashboard and Confirm Save.

The following screenshot previews the entire dashboard page with each created visualization.

Summary

This post demonstrated how visualizations help you understand log data efficiently, and how the Discover feature lets you look at the formatted data stored in Amazon ES. It also showed how dashboards let you view multiple visualizations on a single screen. With this understanding of how to create visualizations and dashboards from PostgreSQL log data, you can easily try out other charts.
AWS welcomes feedback, so please leave your comments and questions below.

About the Author

Marcel George is a Consultant with Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.
awsexchage · 8 years ago
Photo
Tumblr media
Golden Week Special: Small-Tips Dojo, One-Bout Match — the 川原洋平 expedition team encounters the cat that dwells in the depths of Elasticsearch! http://ift.tt/2qUAwHw
About this post

This post is an homage to 川口浩探検隊 (the Kawaguchi Hiroshi expedition series).
Amazon.co.jp | 水曜スペシャル 川口浩 探検シリーズ ~未確認生物編~ DVD-BOX (初回限定版) DVD・ブルーレイ - 川口浩
http://ift.tt/2q9S05o
What is the cat API!? (sound effect: dun!)
cat APIs | Elasticsearch Reference [5.4] | Elastic
http://ift.tt/1JBzRxQ
The cat API is a handy set of Elasticsearch APIs for retrieving all sorts of information about nodes and indexes, useful for troubleshooting Elasticsearch and for capacity planning! (sound effect: dun!) Personally, I especially like that it can fetch the shard and document counts of each index, like this! (sound effect: dun!)
ubuntu@ubuntu-xenial:~$ curl localhost:9200/_cat/indices?v
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   twitter VSZ8zCTBS86Awm6n3pmckw   5   1          2            0      9.1kb          9.1kb
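As a side note (an illustrative sketch, not from the original post): the whitespace-separated table that `?v` prints can be parsed into records using nothing but the header row. The function name `parse_cat_output` is made up for this example:

```python
def parse_cat_output(text: str) -> list[dict]:
    """Parse the header+rows table printed by Elasticsearch _cat APIs
    when called with ?v (verbose headers)."""
    lines = [line for line in text.splitlines() if line.strip()]
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

sample = """health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open twitter VSZ8zCTBS86Awm6n3pmckw 5 1 2 0 9.1kb 9.1kb"""

rows = parse_cat_output(sample)
print(rows[0]["index"], rows[0]["docs.count"])  # twitter 2
```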
But is there really a cat living inside Elasticsearch…? (sound effect: dun!) By the way, the details of the cat API are covered in this blog post! (sound effect: dun!)
Is a cat really living in there!? (sound effect: dun!)
Environment tested
ubuntu@ubuntu-xenial:~$ curl localhost:9200/
{
  "name" : "u0y-s7V",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "p20emC22SyG96ziu0qwl5w",
  "version" : {
    "number" : "5.3.2",
    "build_hash" : "3068195",
    "build_date" : "2017-04-24T16:15:59.481Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}
Meow!
The 川原洋平 expedition team finally saw it! (sound effect: dun!)
ubuntu@ubuntu-xenial:~$ curl localhost:9200/_cat
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates
Hmm…
True to the name cat API… a cat really was living there!! (sound effect: dun!)
And then, at last, the 川原洋平 expedition team…
…discovered a cat in the source code! (sound effect: dun!)
elastic/elasticsearch
http://ift.tt/2q9Aka6
elasticsearch - Open Source, Distributed, RESTful Search Engine
That's all
This was a memo of how the 川原洋平 expedition team discovered the cat living in the depths of Elasticsearch.
The original article is here:
"Golden Week Special: Small-Tips Dojo, One-Bout Match — the 川原洋平 expedition team encounters the cat that dwells in the depths of Elasticsearch!"
May 24, 2017 at 02:00PM
repsly-blog · 9 years ago
Text
Field Data Insight: Distribution of Activities by Activity Type in Different Industries
Last week, we looked into regional differences in the distribution of activities conducted by field reps around the world. This week, we explore the differences in activity distribution between five industries: CPG food & beverage, CPG outside of food (such as clothing and furniture), field services, medical/pharma, and other industries including property management and field sales & marketing. Activities include filling out mobile forms, taking orders, performing audits or inspections, recording client notes, and taking photos.
To read more, click here!
repsly-blog · 9 years ago
Text
Field Data Insight: Distribution of Activities by Activity Type
This week, we explored the different types of activities conducted in the Repsly Mobile CRM by field representatives. Activities consist of completing mobile forms, creating orders, performing audits or inspections, recording notes, and taking photos on a mobile form.
To read more, click here!
repsly-blog · 10 years ago
Text
Field Data Insight: Average Number of Minutes Per Visit by Industry
This week, we investigated the number of minutes an average field rep spent per visit. The data was extracted across five industry categories: consumer packaged goods (CPG) in food and beverage, CPG outside of food such as apparel and electronics, field services, medical and pharmaceuticals, and other industries such as property management and field sales & marketing.
To read more, click here!