rklick-blog
Rklick Solutions LLC
22 posts
rklick-blog · 8 years ago
Text
Push Data To Power BI Operation
With the Push Datasets API we can get datasets and create a dataset.
View On WordPress
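The excerpt above is truncated, so here is a minimal, hedged Scala sketch of the "Create Dataset" call in the Power BI Push Datasets REST API, using only the JDK's HttpURLConnection. The dataset name "SalesPush", the "Sales" table, its columns and the access-token placeholder are illustrative assumptions, not values from the original post.

import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

object CreatePushDataset {
  def main(args: Array[String]): Unit = {
    // Placeholder: an Azure AD access token with Power BI API permissions.
    val accessToken = "<azure-ad-access-token>"

    // "Create Dataset" posts a dataset definition (name, tables, columns) to the datasets endpoint;
    // "Get Datasets" is a plain GET against the same URL with the same bearer token.
    val url = new URL("https://api.powerbi.com/v1.0/myorg/datasets")
    val body =
      """{"name": "SalesPush",
        |  "tables": [{"name": "Sales",
        |    "columns": [{"name": "Product", "dataType": "String"},
        |                {"name": "Amount",  "dataType": "Double"}]}]}""".stripMargin

    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Authorization", s"Bearer $accessToken")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)
    conn.getOutputStream.write(body.getBytes(StandardCharsets.UTF_8))
    println(s"Create Dataset returned HTTP ${conn.getResponseCode}")
    conn.disconnect()
  }
}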
0 notes
rklick-blog · 8 years ago
Text
Power BI Service Dataset Connect to Power BI Desktop
This feature allows you to connect Power BI Desktop to datasets in the Power BI service, so you can create new reports from existing datasets you have already published to the Power BI web service.
To get started with this feature, you’ll first need to enable the preview option in Power BI Desktop. Navigate to File > Options and settings > Options > Preview features and enable Power BI…
View On WordPress
0 notes
rklick-blog · 8 years ago
Text
New Navigation Features In Power BI
The new navigation makes it easy to access content quickly. With Recent, you get quick access to the dashboards and reports you have most recently opened across all of your workspaces.
There is also a new location for all dashboards that others have shared with you.
The same layout also appears in your 'My Workspace' and Group Workspaces: the content shows up in the main area, with separate tabs for dashboards, reports, workbooks and…
View On WordPress
0 notes
rklick-blog · 8 years ago
Text
Push Datasets To Power BI Streaming Datasets By Microsoft Flow
We can use Power BI streaming datasets to build real-time dashboards by pushing data to a REST API endpoint and having that data update within seconds on streaming visuals.
To create a streaming dataset:
First, go to the Power BI web service and click the Streaming datasets option.
From there, we will create a dataset of type API.
We can name the dataset whatever we want, but it is necessary…
View On WordPress
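As a rough companion to the steps above (which the full post completes in Microsoft Flow), this is a hedged Scala sketch of pushing one row to an API-type streaming dataset by hand. The push URL, field names and values are placeholders; the real URL is the one the Power BI service displays after the streaming dataset is created.

import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

object PushToStreamingDataset {
  def main(args: Array[String]): Unit = {
    // Placeholder: paste the push URL shown by the Power BI service for the API streaming dataset.
    val pushUrl = new URL("<push-url-from-power-bi>")

    // A streaming dataset accepts a JSON array of row objects whose fields match the ones defined in the service.
    val payload = """[{"temperature": 21.5, "time": "2017-01-01T00:00:00Z"}]"""

    val conn = pushUrl.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)
    conn.getOutputStream.write(payload.getBytes(StandardCharsets.UTF_8))
    println(s"Push returned HTTP ${conn.getResponseCode}")
    conn.disconnect()
  }
}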
0 notes
rklick-blog · 8 years ago
Text
Power BI Report View Updates
Drop down slicer – Power BI is adding more types of slicers to reports, starting with the drop-down slicer. This slicer is useful when we have a lot of data to look through.
In the slicer visual we can switch it to a drop-down box.
The data then appears as shown below.
Hierarchical axis – with a hierarchy in the visual, the drill-down option gives a detailed view along the hierarchical axis, and better…
View On WordPress
0 notes
rklick-blog · 8 years ago
Text
Get Data from MongoDB and Store It Back to MongoDB
Use Case:
Here we will create a database and a collection in MongoDB, load its data into a Spark DataFrame, and then store it back into the MongoDB collection.
Versions Used:
Scala 2.11.8, Spark 2.0.2
Steps:
1. First we need to create a database in MongoDB. If the database already exists, we can use it. To create a database, write…
View On WordPress
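The full post walks through the mongo shell steps; as a minimal sketch of the Spark side, assuming the MongoDB Spark connector for Spark 2.0.x is on the classpath and MongoDB runs locally, where the database "testdb" and the collections "colors"/"colorsCopy" are made-up names:

import org.apache.spark.sql.SparkSession

object MongoRoundTrip {
  def main(args: Array[String]): Unit = {
    // Assumed local MongoDB; the input and output URIs point at made-up collections.
    val spark = SparkSession.builder()
      .appName("mongo-round-trip")
      .master("local[*]")
      .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/testdb.colors")
      .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/testdb.colorsCopy")
      .getOrCreate()

    // Read the collection into a DataFrame through the connector's data source.
    val df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
    df.show()

    // Write the same rows back out to the output collection.
    df.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()

    spark.stop()
  }
}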
0 notes
rklick-blog · 8 years ago
Text
Read Data from a Kafka Stream and Store It in MongoDB
Use Case:
In this tutorial we will create a topic in Kafka, use a producer to produce some data in JSON format, and store it in MongoDB. For example, here we will pass a colour and its hexadecimal code as JSON through Kafka and put it in a MongoDB collection.
Versions used:
Kafka 0.10.0, Spark 2.0.2, Scala 2.11.8
Steps:
View On WordPress
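A hedged sketch of the streaming side, assuming spark-streaming-kafka-0-10 and the MongoDB Spark connector are on the classpath; the topic name "colors", the consumer group, the output collection and the JSON shape ({"colour": "red", "hex": "#FF0000"}) are illustrative, following the colour/hex example in the excerpt.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaToMongo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-mongo").setMaster("local[*]")
      .set("spark.mongodb.output.uri", "mongodb://127.0.0.1/testdb.colors") // assumed output collection
    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "color-consumers",
      "auto.offset.reset" -> "latest"
    )

    // Subscribe to the (assumed) "colors" topic; each record's value is a JSON string
    // such as {"colour":"red","hex":"#FF0000"} produced by the Kafka console or a custom producer.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("colors"), kafkaParams))

    stream.map(_.value).foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
        // Parse the JSON strings into a DataFrame and append the rows to MongoDB.
        val df = spark.read.json(rdd)
        df.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}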
0 notes
rklick-blog · 9 years ago
Text
Loading and Saving from Different Data Sources in Spark 2.0.2
In this blog we discuss Spark 2.0.2 and demonstrate its basic functionality, describing how to load and save data. We have tried to cover the basics of Spark 2.0.2 core functionality, such as reading and writing data from different sources (CSV, JSON, text).
Loading and saving a CSV file
As an example, the following creates a DataFrame based on the content of a CSV…
View On WordPress
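A short sketch of the kind of reads and writes the post covers, assuming Spark 2.0.2's built-in CSV, JSON and text sources; the file paths are placeholders.

import org.apache.spark.sql.SparkSession

object LoadAndSave {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("load-and-save").master("local[*]").getOrCreate()

    // CSV: treat the first line as a header and infer the schema (paths are placeholders).
    val csvDf = spark.read.option("header", "true").option("inferSchema", "true").csv("data/people.csv")

    // JSON and plain text go through the same DataFrameReader API.
    val jsonDf = spark.read.json("data/people.json")
    val textDf = spark.read.text("data/notes.txt")
    textDf.show()

    // Saving works symmetrically through DataFrameWriter.
    csvDf.write.mode("overwrite").json("out/people-json")
    jsonDf.write.mode("overwrite").option("header", "true").csv("out/people-csv")

    spark.stop()
  }
}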
0 notes
rklick-blog · 9 years ago
Text
Tutorial: Quick Overview of Spark 2.0.1 Core Functionality
In this blog we discuss Spark 2.0.1 core functionality and demonstrate its basics, describing SparkSession, Spark SQL and the DataFrame API. We have tried to cover the basics of Spark 2.0.1 core functionality and SparkSession.
SparkSession:
SparkSession is the new entry point of Spark. In the previous version (1.6.x) of Spark, SparkContext was the entry…
View On WordPress
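A minimal sketch of the new entry point described above; the app name and the tiny example DataFrame are just illustrative.

import org.apache.spark.sql.SparkSession

object SparkSessionDemo {
  def main(args: Array[String]): Unit = {
    // SparkSession replaces the SQLContext/HiveContext entry points used up to Spark 1.6.x.
    val spark = SparkSession.builder()
      .appName("spark-2.0.1-overview")
      .master("local[*]")
      .getOrCreate()

    // The session exposes the DataFrame and SQL APIs directly.
    val df = spark.range(1, 6).toDF("id")
    df.createOrReplaceTempView("numbers")
    spark.sql("SELECT id * 10 AS scaled FROM numbers").show()

    spark.stop()
  }
}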
0 notes
rklick-blog · 9 years ago
Text
ElasticSearch Character Filter
In this post, I am going to explain how the Elasticsearch character filter works. There are the following steps to do this. Step 1: Set the mapping for your index. Suppose our index name is 'testindex' and the type is 'testtype'. Now we are going to set the analyzer and filter.
curl -XPUT 'localhost:9200/testindex' -d ' { "settings": { "analysis": { "char_filter": { "quotes": { "type": "mapping",…
View On WordPress
0 notes
rklick-blog · 9 years ago
Text
Introduction to Spark 2.0
Overview of the Dataset, DataFrame and RDD APIs:
A Resilient Distributed Dataset (RDD) is the fundamental data structure of Spark: an immutable distributed collection of objects. Each RDD is divided into logical partitions, which may be computed on different nodes of the cluster.
But because RDDs miss out on Spark's advanced optimizations, the API moved on to DataFrames.
DataFrames brought custom memory…
View On WordPress
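A small sketch contrasting the three APIs mentioned above, assuming a local Spark 2.0.x session; the Person case class and sample rows are made up.

import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Int)

object ApiOverview {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rdd-df-ds").master("local[*]").getOrCreate()
    import spark.implicits._

    // RDD: a low-level, immutable distributed collection of objects.
    val rdd = spark.sparkContext.parallelize(Seq(Person("Asha", 30), Person("Ravi", 25)))

    // DataFrame: schema-aware rows, optimised by Catalyst and Tungsten.
    val df = rdd.toDF()
    df.filter($"age" > 26).show()

    // Dataset: the same optimisations plus compile-time typing.
    val ds = df.as[Person]
    ds.map(_.name).show()

    spark.stop()
  }
}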
0 notes
rklick-blog · 9 years ago
Text
Elasticsearch Graph capabilities
Step 1: Downloading and installing Elasticsearch
a) Download Elasticsearch using the following command:
wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.2/elasticsearch-2.3.2.tar.gz
b) After downloading, untar it using this command:
tar -xzf elasticsearch-2.3.2.tar.gz
c) Go to the Elasticsearch directory:
cd elasticsearch-2.3.2
View On WordPress
0 notes
rklick-blog · 9 years ago
Text
Hadoop on Multi Node Cluster
Step 1: Installing Java:
Java is the primary requirement for running Hadoop, so make sure you have Java installed on your system by using the following command:
$ java -version
If you don't have Java installed on your system, use one of the following links to install it first.
Install Java 8 on Ubuntu
Install Java 8 on CentOS/RHEL
Step 2: Creating a Hadoop user:
We recommend creating a normal (nor…
View On WordPress
0 notes
rklick-blog · 9 years ago
Text
Data ingestion from Google spreadsheet to Elasticsearch
In this blog we explain how to ingest data from a Google spreadsheet into Elasticsearch.
There are five steps to ingest data from a Google spreadsheet into Elasticsearch. Please follow the steps below:
Step 1) Log in to your account.
Step 2) Open the spreadsheet and follow the steps below.
Open the spreadsheet, click on Add-ons, and type elasticsearch in the search box. You will see the screen below.
  Now click…
View On WordPress
0 notes
rklick-blog · 9 years ago
Text
Tutorial: DataFrame API Functionalities using Spark 1.6
In the previous tutorial, we explained Spark SQL and DataFrame operations using Spark 1.6. In this tutorial we cover DataFrame API functionalities, and we have provided a running example of each functionality for better support. Let's begin the tutorial and discuss the DataFrame API operations using Spark 1.6.
DataFrame API Example Using Different types of…
View On WordPress
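Since the excerpt cuts off before the examples, here is a hedged sketch of a few DataFrame API calls on Spark 1.6 with its SQLContext entry point; the sample rows are invented.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object DataFrameOps16 {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("df-api-1.6").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Build a small DataFrame from an in-memory collection.
    val df = sc.parallelize(Seq(("Asha", 30), ("Ravi", 25))).toDF("name", "age")

    // A few common DataFrame API operations: projection, filtering and aggregation.
    df.select("name").show()
    df.filter($"age" > 26).show()
    df.groupBy("age").count().show()

    sc.stop()
  }
}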
0 notes
rklick-blog · 9 years ago
Text
Tutorial : Spark SQL and DataFrames Operations using Spark 1.6
In the previous tutorial, we explained Spark Core and RDD functionality. In this tutorial we cover Spark SQL and DataFrame operations on different source files such as JSON, text and CSV, and we have provided a running example of each functionality for better support. Let's begin the tutorial and discuss Spark SQL and DataFrame operations using Spark 1.6.
Spark SQL
Spark SQL…
View On WordPress
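A minimal sketch of the Spark SQL side on 1.6, assuming placeholder JSON and text files; CSV on 1.6 would additionally need the external spark-csv package, which this sketch leaves out.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SparkSql16 {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-sql-1.6").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Load a JSON source (the path is a placeholder) and query it with SQL.
    val people = sqlContext.read.json("data/people.json")
    people.registerTempTable("people")
    sqlContext.sql("SELECT name, age FROM people WHERE age > 25").show()

    // Plain text files come back as a single-column DataFrame named "value".
    val lines = sqlContext.read.text("data/notes.txt")
    lines.show()

    sc.stop()
  }
}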
1 note · View note