#uniqueidentifier
thedbahub · 1 year ago
Text
Optimizing a Large SQL Server Table with a Better Primary Key
Introduction Inheriting a large SQL Server table with suboptimal indexing can be a daunting task, especially when dealing with a 10 TB table. In this article, we’ll explore a real-world scenario where a table uses a uniqueidentifier as its clustered primary key and has an unnecessary identity int column. We’ll discuss the steps to efficiently optimize this table by replacing the clustered…
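(As a rough sketch of the kind of change involved — the article's exact steps are truncated above, and all object names here are hypothetical:)

-- Assume PK_BigTable is the existing clustered PK on the uniqueidentifier column,
-- and RowId is the existing int identity column.
ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;   -- removes the GUID clustered index
ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable
    PRIMARY KEY CLUSTERED (RowId);                      -- cluster on the narrow, ever-increasing int
CREATE UNIQUE NONCLUSTERED INDEX IX_BigTable_Guid
    ON dbo.BigTable (GuidId);                           -- keep GUID lookups seekable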
0 notes
seotoolsreport · 8 months ago
Text
🔑 Create Unique Identifiers with the UUID Generator
Need a fast way to generate universally unique identifiers (UUIDs)? Use our UUID Generator to instantly create secure, random UUIDs for your projects, applications, or databases. Perfect for developers and system admins!
Generate your UUID now https://seotools.report/tools/uuid-generator
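(If the target is SQL Server, the engine can also mint one directly — a trivial example:)

SELECT NEWID() AS GeneratedUuid;  -- returns a new random uniqueidentifier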
0 notes
dailyfont-com · 11 months ago
Photo
Ultra-condensed industrial sans serif font with geometric simplicity, perfect for bold branding, titling, and headlines that demand attention.
Link: https://l.dailyfont.com/uYJRa
1 note · View note
info-comp · 5 years ago
Link
From this article you will learn what a GUID is and how to work with GUIDs in Microsoft SQL Server.
0 notes
bassaminfotech · 2 years ago
Link
How to Create Sequence Numbers in Odoo 16  
1 note · View note
dondiesel · 8 years ago
Photo
More Receipts 🚤🚤🚤 #FYD #ImCampaigning💪💯 #NoCompEP😡 #indie #hiphop #music #barcodes #UPC #uniqueidentifier #bmi #ascap #cdbaby #royalties #publishing
0 notes
babyawacs · 3 years ago
Text
woa!!! @debian .@debian @planetdebian the #mount points are clearly a key vulnerability in @linux ie mountable over volume, hardwarelink, uniqueidentifier and whatnot. with whocan- frivolously, and to it with godknows which repositories per filesystem andor encryptionsystem dependencies. w o a ****************** n o w o n d e r the pros setup individual volumes/partitions for each quirky folder and bitterly declinate the mountpoints permissions of each mountable partition/volume and not only the acl permissions ****************** godknowswhat it does even with the making of scripts overriding at anypoint what youwant, the hideous paths of NOT sth not only allow sth not nopermission and itis an opensystem letalone from personnel likely eachintel has a few finger in the underwear of some regrettably ******** why dont you give anoption hardened security install with allhardenings and optimised mountpoint quirks and noscripts addable fixed and no hideous nopaths ******** greetings I am Christian KISS BabyAWACS – Raw Independent Sophistication #THINKTANK + #INTEL #HELLHOLE #BLOG https://www.BabyAWACS.com/ [email protected] PHONE / FAX +493212 611 34 64 Helpful? Pay. Support. Donate. paypal.me/ChristianKiss
0 notes
prevajconsultants · 8 years ago
Text
VB.Net DAL Generator - Source Code (Database Abstractions)
VB.Net DAL Generator is a .net desktop application that generates VB.Net Data Access Layer for SQL Server and MS Access databases. The purpose of this application is to make software development easy. It creates VB.Net classes (one per table) that contain methods for CRUD operations. The generated code can be used in web as well as desktop apps.
If you need C# DAL Generator for SQL Server and MS Access then click here. If you need C# DAL Generator for MySQL then click here. If you need DAL Generator for Entity Framework (C#/VB.Net) then click here. If you need PHP Code Generator for MySQL/MySQLi/PDO then click here.
Video Demo:
Click here to view the video demo.
Features:
It creates VB.Net classes (one for each table).
Supports SQL Server and MS Access.
The class contains all columns of the table as properties.
Data types have been handled nicely.
Creates methods for CRUD operations.
Sorting has been handled.
Pagination has been handled (SQL Server only).
Primary key is automatically detected for each table.
Composite primary key is supported.
Nullable columns have been handled.
Identity column has been handled.
Timestamp column has been handled.
Completely indented code is generated.
The generated code can be used in both desktop and web applications.
All the following data types of SQL Server are supported: char, nchar, varchar, nvarchar, text, ntext, xml, decimal, numeric, money, smallmoney, bit, binary, image, timestamp, varbinary, date, datetime, datetime2, smalldatetime, datetimeoffset, time, bigint, int, smallint, tinyint, float, real, uniqueidentifier, sql_variant
Source code (written in C#) has also been provided, to enable users to make changes according to their own programming conventions.
Sample Application:
A sample web application that uses the generated code has also been provided. In this application, one form (for employees) has been created. The app uses the generated data access layer without modifying a single line of the generated code.
Generated Code:
VB.Net Class: For each table one VB.Net class is created that contains all columns of the table as properties.
Add Method: It is an instance method. It adds a new record to the relevant table. Nullable columns are handled properly: if you don’t want to insert values into the nullable columns, simply don’t set the relevant properties. Identity and timestamp columns cannot be inserted manually, so these columns are skipped while inserting a record (a sketch of the emitted INSERT appears after this list). The property for the identity column is populated after the record is inserted.
Update Method: It is an instance method. It updates an existing record. Identity and timestamp columns are skipped while updating a record.
Delete Method: It is a shared method. It deletes an existing record. It takes primary key column values as parameters.
Get Method: It is a shared method. It gets an existing record (an instance of the class is created and all properties are populated). It takes primary key column values as parameters.
GetAll Method: It is a shared method. It gets all records of the relevant table. You can also specify search columns. If sorting/pagination is enabled then the relevant code will also be generated.
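As a rough illustration (not the tool’s literal output — table and column names here are hypothetical), the INSERT emitted by the Add method for a table with identity and timestamp columns would be shaped like this:

-- Hypothetical table: Employee(Id int IDENTITY, Name nvarchar(50) NOT NULL, Notes nvarchar(4000) NULL, RowVer timestamp)
INSERT INTO Employee (Name, Notes)   -- Id (identity) and RowVer (timestamp) are skipped
VALUES (@Name, @Notes);              -- @Notes is passed as NULL when the property was not set
SELECT SCOPE_IDENTITY();             -- used to populate the Id property after the insert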
0 notes
wwwtandemlabtech · 6 years ago
Text
SQL Server Compact ADO.NET data access performance – part 2: INSERTs
In this second part of this series, I will show by example how to use the two available ADO.NET APIs in the SQL Server Compact ADO.NET provider for INSERTing data. I will also show some informal performance measurements, but keep in mind that your scenario may give different results.
In the sample solution, I will create a data access library for maintaining the following table, which you could imagine being used in a caching library:
CREATE TABLE CacheElement (
    [Key] uniqueidentifier NOT NULL,
    Tag NVARCHAR(4000) NULL,
    Value image NOT NULL,
    CreatedAt DATETIME NOT NULL,
    ExpirationAt DATETIME NOT NULL);
ALTER TABLE CacheElement
    ADD CONSTRAINT PK_CacheElement PRIMARY KEY ([Key]);
This table will represent the following class:
public class CacheElement
{
    public Guid Key { get; set; }
    public string Tag { get; set; }
    public Byte[] Value { get; set; }
    public DateTime CreatedAt { get; set; }
    public DateTime ExpirationAt { get; set; }
}
I will implement the following interface methods in the two possible ways, and then compare implementation and timings, by calling the library from a unit test project:
public interface IDataAccess
{
    void InsertElement(CacheElement element);
    void InsertElements(List<CacheElement> elements);
}
(I will add more methods and implementations in the upcoming blog posts.)
First, let us look at the implementation of the T-SQL based INSERT, using best practices with explicit parameters (in the ClassicDal class):
public void InsertElement(CacheElement element)
{
    using (var command = _connection.CreateCommand())
    {
        command.Parameters.Add("Key", SqlDbType.UniqueIdentifier).Value = element.Key;
        command.Parameters.Add("Value", SqlDbType.Image).Value = element.Value;
        command.Parameters.Add("CreatedAt", SqlDbType.DateTime).Value = element.CreatedAt;
        command.Parameters.Add("ExpirationAt", SqlDbType.DateTime).Value = element.ExpirationAt;
        if (String.IsNullOrWhiteSpace(element.Tag))
        {
            command.Parameters.Add("Tag", SqlDbType.NVarChar, 4000).Value = DBNull.Value;
        }
        else
        {
            command.Parameters.Add("Tag", SqlDbType.NVarChar, 4000).Value = element.Tag;
        }
        command.CommandText = @"INSERT INTO CacheElement
            ([Key],Tag,Value,CreatedAt,ExpirationAt)
            VALUES
            (@Key, @Tag, @Value, @CreatedAt, @ExpirationAt)";
        command.ExecuteNonQuery();
    }
}
For all tests, I am using best practices for SQL Compact connection handling, and passing an already open connection. This avoids measuring the overhead of loading the SQL Compact engine DLL files and opening the database file. Notice that NULLable values require special handling.
Now let us look at the implementation that uses the “raw” APIs for INSERTs (in the RawDAL class):
public void InsertElement(CacheElement element)
{
    using (var command = _connection.CreateCommand())
    {
        command.CommandType = CommandType.TableDirect;
        command.CommandText = "CacheElement";
        using (var resultSet = command.ExecuteResultSet(ResultSetOptions.Updatable))
        {
            SqlCeUpdatableRecord record = resultSet.CreateRecord();
            record.SetGuid(0, element.Key);
            if (String.IsNullOrWhiteSpace(element.Tag))
            {
                record.SetValue(1, DBNull.Value);
            }
            else
            {
                record.SetString(1, element.Tag);
            }
            record.SetBytes(2, 0, element.Value, 0, element.Value.Length);
            record.SetDateTime(3, element.CreatedAt);
            record.SetDateTime(4, element.ExpirationAt);
            resultSet.Insert(record);
        }
    }
}
Notice the following special pattern: the CommandType is set to TableDirect, the CommandText is the table name, and we use the CreateRecord() method to get a SqlCeUpdatableRecord with “slots” that match our table columns. You have to know the exact “ordinal position” of your columns; you can get that by scripting a CREATE TABLE statement with my Toolbox, or by inspecting the INFORMATION_SCHEMA.COLUMNS table. There are typed SetXxx methods that must match the data type of your columns. Finally, call the Insert method to add the “record”.
In the sample code, I have also implemented methods to insert many rows in a single method invocation, thus saving the overhead of recreating a number of objects for each INSERT. This is similar to the way I insert data with my SqlCeBulkCopy library.
First the timings for a single INSERT: Raw: 31 ms, Classic: 29 ms – hardly any difference. Then 1000 single INSERTs, minimal object reuse: Raw: 2548 ms, Classic: 1936 ms – in this case no advantage for the raw API. Then finally 1000 INSERTs, maximum object reuse: Raw: 73 ms, Classic: 354 ms – a significant difference, if inserting many rows in a single call is a pattern your application uses.
You can download the code sample from here, and stay tuned for the next installment: SELECT (using SetRange and Seek)
0 notes
blockcontech-blog · 6 years ago
Text
SQL Server Compact Code Snippet of the Week #1 : locate the ROWGUIDCOL column in a table
During the next many weeks, I plan to publish a short, weekly blog post with a (hopefully) useful code snippet relating to SQL Server Compact. The code snippets will come from 3 different areas: SQL Server Compact T-SQL statements, ADO.NET code, and sample usage of my scripting API.
The ROWGUIDCOL column property is defined like this in Books Online: “Indicates that the new column is a row global unique identifier column. Only one uniqueidentifier column per table can be designated as the ROWGUIDCOL column. The ROWGUIDCOL property can be assigned only to a uniqueidentifier column.” ROWGUIDCOL automatically generates values for new rows inserted into the table. (You can also use a default of NEWID() to automatically assign values to uniqueidentifier columns.) The ROWGUIDCOL is used by Merge Replication; all merge-replicated tables must have a ROWGUIDCOL column.
Enough talk, show me the code snippet:
SELECT column_flags, column_name, table_name
FROM information_schema.columns
WHERE column_flags = 378 OR column_flags = 282
I am using the undocumented “column_flags” column to determine the ROWGUIDCOL column, and the reason for the 2 different values is that a uniqueidentifier column can be either NULL or NOT NULL.
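For a minimal illustration (hypothetical table), a ROWGUIDCOL column is declared like this — after which the snippet above will return it:

CREATE TABLE Customer (
    Id uniqueidentifier NOT NULL ROWGUIDCOL DEFAULT NEWID(),
    Name nvarchar(100) NOT NULL);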
0 notes
Text
SQL Server - using newsequentialid() as a function, similar to newid()
This blog post shows how you can use newsequentialid() as a function in scripts etc., not only as a column default value.
In many scenarios, unique identifiers are used as clustered primary keys in database tables for various reasons. This blog post will not discuss the pros and cons of doing this. Usage of GUID/uniqueidentifier and its implications for fragmentation, and how newsequentialid() can help improve this, has been documented in various places. A limitation of newsequentialid() is that it can only be used as a default value for a column, not as a function in, for example, ad-hoc INSERT scripts. By taking advantage of SQLCLR, this situation can be changed.
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Data.SqlClient;
public class SqlFunctions
{
    public static SqlGuid newsequentialid()
    {
        using (SqlConnection connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            var sql = @"
                DECLARE @NewSequentialId AS TABLE (Id UNIQUEIDENTIFIER DEFAULT(NEWSEQUENTIALID()))
                INSERT INTO @NewSequentialId DEFAULT VALUES;
                SELECT Id FROM @NewSequentialId;";
            using (SqlCommand cmd = new SqlCommand(sql, connection))
            {
                object idRet = cmd.ExecuteScalar();
                return new SqlGuid((Guid)idRet);
            }
        }
    }
}
The code above implements a SQLCLR function named newsequentialid(). To build this code, simply create a C# class library, include the code, and build. The code is inspired by this thread on SQLServerCentral: http://www.sqlservercentral.com/Forums/Topic1006731-2815-1.aspx
To make deploying the function even simpler, the script outlined below can add the assembly code to your database and register the function:
EXEC sp_configure @configname=clr_enabled, @configvalue=1;
GO
RECONFIGURE;
GO
IF NOT EXISTS (SELECT * FROM sys.assemblies asms WHERE asms.name = N'SqlFunctions' and is_user_defined = 1)
CREATE ASSEMBLY [SqlFunctions] FROM 0x4D5A… (rest omitted, use full script)
WITH PERMISSION_SET = SAFE
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[newsequentialid]') AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
BEGIN
execute dbo.sp_executesql @statement = N'
CREATE FUNCTION [dbo].[newsequentialid]() RETURNS uniqueidentifier
AS EXTERNAL NAME [SqlFunctions].[SqlFunctions].[newsequentialid];
'
END
GO
You can download the full script from here: http://sdrv.ms/1hhYDY1
Testing with 50,000 inserts, like in the CodeProject article, reveals the following figures:
Newsequentialid as DEFAULT: run time: 1:18, pages: 1725, fragmentation: 0.7 %
Newsequentialid as function in INSERT statement, no default value on table: run time: 2:03, pages: 1725, fragmentation: 0.7 %
To use the function as a replacement for newid(), simply use dbo.newsequentialid() instead. But please also consider using another column as the clustering key in your table…
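Once deployed, a quick smoke test of the function might look like this (table name hypothetical):

CREATE TABLE dbo.Demo (Id uniqueidentifier NOT NULL PRIMARY KEY, Payload nvarchar(50) NULL);
INSERT INTO dbo.Demo (Id, Payload) VALUES (dbo.newsequentialid(), N'first');
INSERT INTO dbo.Demo (Id, Payload) VALUES (dbo.newsequentialid(), N'second');  -- no DEFAULT constraint required
SELECT Id, Payload FROM dbo.Demo;  -- the two Ids are sequential, not random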
0 notes
deplloyer · 6 years ago
Text
Release: SQL Server Migration Assistant (SSMA) v8.0
Overview
SQL Server Migration Assistant (SSMA) for Oracle, MySQL, SAP ASE (formerly SAP Sybase ASE), DB2, and Access allows users to convert a database schema to a Microsoft SQL Server schema, deploy the schema, and then migrate data to the target SQL Server (see below for supported versions).
What's new?
Each iteration of the SSMA tool has been enhanced with targeted fixes that improve quality and conversion metrics. In addition, this release offers the following new features:
Support Azure SQL Database Managed Instance as a target. You can now create new projects targeting Azure SQL Database Managed Instance.
Note: The SSMA for Oracle Extension Pack was also updated to allow remote installations on Azure SQL Database Managed Instance.
Some features, including Tester and Server-side data migration, are not supported when targeting Azure SQL Database Managed Instance. Read more here.
Post-conversion Fix advisor. Learn more about it here.
Preliminary database/schema selection.
When connecting to a source, the user is now able to select the databases/schemas of interest. Selecting only the schemas you plan to migrate will save time during the initial connection and improve overall SSMA performance.
Finally, SSMA for Oracle has been enhanced to:
Use the official managed .NET driver to connect to Oracle. The OCI driver is no longer a prerequisite for using SQL Server Migration Assistant for Oracle.
Map ROWID and UROWID to VARCHAR by default. Changed from ‘uniqueidentifier’ to accommodate data migration for explicit ROWID columns.
Downloads
SSMA for Access
SSMA for DB2
SSMA for MySQL
SSMA for Oracle
SSMA for SAP ASE
Supported source and target versions
Source: For the list of supported sources, please review the information on the Download Center for each of the above SQL Server Migration Assistant downloads.
Target: SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017, SQL Server 2019, Azure SQL Database, Azure SQL Database Managed Instance, and Azure SQL Data Warehouse*.
*Azure SQL Data Warehouse is supported as a target only when using SSMA for Oracle.
Resources
SQL Server Migration Assistant documentation
SQL Server Migration Assistant: How to assess and migrate database(s) from non-Microsoft data platforms to SQL Server
Extending SQL Server Migration Assistant’s conversion capabilities
0 notes
alanlcole · 7 years ago
Text
How To Find Random Record In SQL Server
In this blog, I will explain how to find a random record in SQL Server. This is a common interview question. The CHECKSUM function returns the checksum value computed over a table row, or over an expression list. Use CHECKSUM to build hash indexes: a hash index will result if the CHECKSUM function has column arguments and an index is built over the computed CHECKSUM value. This can be used for equality searches over the columns. The SQL NEWID function is used for selecting a random row from a result set in SQL Server databases. NEWID is used to assign a value to a variable declared as the uniqueidentifier data type. source: https://www.c-sharpcorner.com/Blogs/how-to-find-random-record-in-sql-server
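For instance, two common illustrative patterns (table name hypothetical):

-- Shuffle the result set and take one row (simple, but sorts the whole table):
SELECT TOP (1) * FROM dbo.Employee ORDER BY NEWID();

-- Derive a random non-negative integer from NEWID() via CHECKSUM, e.g. for sampling:
SELECT ABS(CHECKSUM(NEWID())) % 100 AS RandomUnder100;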
0 notes
just4programmers · 7 years ago
Text
SQL Server on Linux or in Docker plus cross-platform SQL Operations Studio
I recently met some folks that didn't know that SQL Server 2017 also runs on Linux but they really needed to know. They had a single Windows desktop and a single Windows Server that they were keeping around to run SQL Server. They had long been a Linux shop and were now fully containerized...except for this machine under Anna's desk. (I assume The Cloud is next...pro tip: don't keep important servers under your desk). You can even get a license first and decide on the platform later.
You can run SQL Server on a few Linux flavors...
Install on Red Hat Enterprise Linux
Install on SUSE Linux Enterprise Server
Install on Ubuntu
or, even better, run it on Docker...
Run on Docker
Of course you'll want to do the appropriate volume mapping to keep your database on durable storage. I'm digging being able to spin up a full SQL Server inside a container on my Windows machine with no install.
I've got Docker for Windows on my laptop and I'm using Shayne Boyer's "Docker Why" repo to make the point. Look at his sample docker-compose file that includes both a web frontend and a backend using SQL Server on Linux.
version: '3.0'
services:
  mssql:
    image: microsoft/mssql-server-linux:latest
    container_name: db
    ports:
      - 1433:1433
    volumes:
      - /var/opt/mssql
      # we copy our scripts onto the container
      - ./sql:/usr/src/app
    # bash will be executed from that path, our scripts folder
    working_dir: /usr/src/app
    # run the entrypoint.sh that will import the data AND sqlserver
    command: sh -c ' chmod +x ./start.sh; ./start.sh & /opt/mssql/bin/sqlservr;'
    environment:
      ACCEPT_EULA: 'Y'
      SA_PASSWORD: P@$$w0rdP@$$w0rd
Note his starting command where he's doing an initial population of the database with sample data, then running sqlservr itself. The SQL Server on Linux Docker container includes the "sqlcmd" command line so you can set up the database, maintain it, etc with the same command line you've used on Windows. You can also configure SQL Server from Environment Variables so it makes it easy to use within Docker/Kubernetes. It'll take just a few minutes to get going.
Example:
/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "ALTER TABLE Names ADD ID UniqueIdentifier DEFAULT newid() NOT NULL;"
I cloned his repo (and I have .NET Core 2.1) and did a "docker-compose up" and boom, running a front end under Alpine and backend with SQL Server on Linux.
101→ C:\Users\scott> docker ps
CONTAINER ID  IMAGE                                COMMAND                 CREATED         STATUS         PORTS                                          NAMES
e5b4dae93f6d  namesweb                             "dotnet namesweb.dll"   38 minutes ago  Up 38 minutes  0.0.0.0:57270->80/tcp, 0.0.0.0:44348->443/tcp  src_namesweb_1
5ddffb76f9f9  microsoft/mssql-server-linux:latest  "sh -c ' chmod +x ./…"  41 minutes ago  Up 39 minutes  0.0.0.0:1433->1433/tcp                         mssql
Command lines are nice, but SQL Server is known for SQL Server Management Studio, a nice GUI for Windows. Did they release SQL Server on Linux and then expect everyone to use Windows to manage it? I say nay nay! Check out the cross-platform and open source SQL Operations Studio, "a data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux." You can download SQL Operations Studio free here.
SQL Ops Studio is really impressive. Here I am querying SQL Server on Linux running within my Docker container on my Windows laptop.
As I'm digging in and learning how far cross-platform SQL Server has come, I also checked out the mssql extension for Visual Studio Code that lets you develop and execute SQL against any SQL Server. The VS Code SQL Server Extension is also open source!
Go check out SQL Server in Docker at https://github.com/Microsoft/mssql-docker and try Shayne's sample at https://github.com/spboyer/docker-why
Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!
© 2018 Scott Hanselman. All rights reserved.
0 notes