# Timothy Valihora
timothyvalihora · 5 days ago
How FastTrack Translates Business Rules into ETL Logic
Data integration projects often stall at the intersection of business rules and technical implementation. While business users define what needs to happen, developers handle how to make it work through extract, transform, and load (ETL) logic. In this handoff, ambiguity and miscommunication may lead to delays and rework. Tools that translate business terms into technical structure can reduce confusion and speed up delivery.
FastTrack bridges this gap by letting business users define transformation rules in plain language. These inputs are then converted into ETL job templates. Instead of passing specifications between teams, everyone works within a shared interface that captures both business intent and technical metadata. This alignment keeps the process efficient and reduces friction between planning and execution.
Using structured templates and a shared metadata model, FastTrack minimizes mismatches between what stakeholders expect and what developers build. Business terms like “customer segment” or “transaction status” link directly to source and target fields, complete with transformation logic. This setup removes the need for manual rewrites or repeated clarification, helping teams stay on track through each stage.
Take a compliance reporting example. A business analyst defines revenue classification logic based on product categories and specific thresholds. In a typical workflow, this might be shared through spreadsheets or meetings, which opens the door to interpretation. With FastTrack, the logic is entered once and converted directly into the ETL structure, keeping the analyst’s intent intact throughout development.
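FastTrack's internal template format is proprietary, but the underlying idea - a rule captured once as structured metadata and then compiled into executable transformation logic - can be sketched in a few lines of Python. Everything here (the `MappingRule` shape, the field names, the 100,000 threshold) is a hypothetical illustration of the pattern, not FastTrack's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MappingRule:
    """One business rule tying a business term to source and target fields."""
    business_term: str                    # e.g. "revenue class"
    source_field: str                     # column in the source system
    target_field: str                     # column in the target model
    transform: Callable[[dict], object]   # the encoded business logic

def classify_revenue(row: dict) -> str:
    """Analyst-defined logic: product category plus a threshold set the class."""
    if row["product_category"] == "enterprise" and row["amount"] >= 100_000:
        return "strategic"
    return "standard"

revenue_rule = MappingRule(
    business_term="revenue class",
    source_field="amount",
    target_field="revenue_class",
    transform=classify_revenue,
)

def apply_rules(row: dict, rules: list[MappingRule]) -> dict:
    """A toy 'ETL job' generated directly from the rule definitions."""
    out = dict(row)
    for rule in rules:
        out[rule.target_field] = rule.transform(row)
    return out

print(apply_rules({"product_category": "enterprise", "amount": 250_000},
                  [revenue_rule]))
# {'product_category': 'enterprise', 'amount': 250000, 'revenue_class': 'strategic'}
```

Because the rule lives in one structure, the definition that documents the analyst's intent is the same one the job executes - there is no spreadsheet to reinterpret.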
This approach also strengthens traceability. FastTrack links source columns, business terms, and target definitions, keeping every transformation visible. That transparency supports both governance and audits. When rules must comply with data privacy or classification policies, encoding them in metadata makes oversight more straightforward.
FastTrack connects with governance tools as well. Once logic is embedded into job templates, stewardship teams can validate mappings, manage approvals, and track changes over time. This workflow ensures accountability from initial input to final deployment and helps limit risk introduced by gaps in oversight. Business logic and data quality remain linked throughout the lifecycle.
While focused on clarity for business teams, FastTrack does not restrict developers. The templates act as a starting point, giving technical teams room to refine or optimize for performance. Organizations gain a consistent base for repeatable logic while still allowing for adjustments in more complex scenarios. This balance enables both standardization and customization at scale.
In large enterprises, FastTrack helps synchronize data efforts across departments. Centralizing transformation logic reduces duplication and supports reusable patterns. Teams can work from a common foundation for tasks like account reconciliation, pipeline reporting, or inventory tracking. Coordination across units strengthens both project velocity and long-term data integrity.
One example comes from a retail group unifying customer data from different platforms. Business analysts create matching rules for names, contacts, and account IDs using FastTrack. The templates are then used across regions, keeping logic consistent while adapting to local systems. This coordination improves data quality and shortens development cycles across business units.
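The reuse pattern - one centrally defined matching rule applied by every region - can be sketched as follows. The field names and match criteria are hypothetical, and production QualityStage matching uses probabilistic scoring far beyond this exact-comparison toy:

```python
import re

def normalize(value: str) -> str:
    """Shared cleansing step: lowercase, strip punctuation and extra spaces."""
    return re.sub(r"[^a-z0-9 ]", "", value.lower()).strip()

def match_customers(a: dict, b: dict) -> bool:
    """Centrally defined rule: same account ID, or same cleansed name and email."""
    if a.get("account_id") and a["account_id"] == b.get("account_id"):
        return True
    return (normalize(a["name"]) == normalize(b["name"])
            and normalize(a["email"]) == normalize(b["email"]))

# Each region feeds records from its local platform through the same rule.
us_record = {"name": "Anne O'Brien", "email": "AOBRIEN@shop.com", "account_id": None}
eu_record = {"name": "anne obrien",  "email": "aobrien@shop.com", "account_id": None}
print(match_customers(us_record, eu_record))  # True
```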
Long-term, this logic-first model pays off by cutting rework and improving documentation. Since transformation rules are captured upfront and persist through deployment, future teams gain full context behind past decisions. That continuity supports faster planning and stronger execution in evolving environments. By strengthening visibility and intent alignment, FastTrack contributes to more resilient data operations.
timothyvalihora · 21 days ago
Governance Without Boundaries - CP4D and Red Hat Integration
The rising complexity of hybrid and multi-cloud environments calls for stronger and more unified data governance. When systems operate in isolation, they introduce risks, make compliance harder, and slow down decision-making. As digital ecosystems expand, consistent governance across infrastructure becomes more than a goal: it becomes a necessity. A cohesive strategy helps maintain control as platforms and regions scale together.
IBM Cloud Pak for Data (CP4D), working alongside Red Hat OpenShift, offers a container-based platform that addresses these challenges head-on. That setup makes it easier to scale governance consistently, no matter the environment. With container orchestration in place, governance rules stay enforced regardless of where the data lives. This alignment helps prevent policy drift and supports data integrity in high-compliance sectors.
Watson Knowledge Catalog (WKC) sits at the heart of CP4D’s governance tools, offering features for data discovery, classification, and controlled access. WKC lets teams organize assets, apply consistent metadata, and manage permissions across hybrid or multi-cloud systems. Centralized oversight reduces complexity and brings transparency to how data is used. It also facilitates collaboration by giving teams a shared framework for managing data responsibilities.
Red Hat OpenShift brings added flexibility by letting services like data lineage, cataloging, and enforcement run in modular, scalable containers. These components adjust to different workloads and grow as demand increases. That level of adaptability is key for teams managing dynamic operations across multiple functions. This flexibility ensures governance processes can evolve alongside changing application architectures.
Kubernetes, which powers OpenShift’s orchestration, takes on governance operations through automated workload scheduling and smart resource use. Its automation ensures steady performance while still meeting privacy and audit standards. By handling deployment and scaling behind the scenes, it reduces the burden on IT teams. With fewer manual tasks, organizations can focus more on long-term strategy.
A global business responding to data subject access requests (DSARs) across different jurisdictions can use CP4D to streamline the entire process. Its built-in tools support compliant responses under GDPR, CCPA, and other regulatory frameworks. Faster identification and retrieval of relevant data helps reduce penalties while improving public trust.
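A toy model shows why a central catalog speeds up DSAR work. The catalog entries, field names, and tagging scheme below are invented for illustration; CP4D holds this information as governed metadata rather than a Python list:

```python
# Hypothetical catalog: each entry records where a field lives and whether
# governance has tagged it as PII.
catalog = [
    {"system": "crm",     "table": "contacts", "field": "email", "pii": True},
    {"system": "billing", "table": "invoices", "field": "email", "pii": True},
    {"system": "web",     "table": "sessions", "field": "page",  "pii": False},
]

def dsar_scope(entries: list[dict]) -> list[str]:
    """List every location that must be searched for a data subject's records."""
    return [f'{e["system"]}.{e["table"]}.{e["field"]}' for e in entries if e["pii"]]

print(dsar_scope(catalog))
# ['crm.contacts.email', 'billing.invoices.email']
```

Without the catalog, that scope list is rebuilt by hand for every request; with it, the search space is a query.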
CP4D’s tools for discovering and classifying data work across formats, from real-time streams to long-term storage. They help organizations identify sensitive content, apply safeguards, and stay aligned with privacy rules. Automation cuts down on human error and reinforces sound data handling practices. As data volumes grow, these automated capabilities help maintain performance and consistency.
Lineage tracking offers a clear view of how data moves through DevOps workflows and analytics pipelines. By following its origin, transformation, and application, teams can trace issues, confirm quality, and document compliance. CP4D’s built-in tools make it easier to maintain trust in how data is handled across environments.
Tight integration with enterprise identity and access management (IAM) systems strengthens governance through precise controls. It ensures only the right people have access to sensitive data, aligning with internal security frameworks. Centralized identity systems also simplify onboarding, access changes, and audit trails.
When governance tools are built into the data lifecycle from the beginning, compliance becomes part of the system. It is not something added later. This helps avoid retroactive fixes and supports responsible practices from day one. Governance shifts from a task to a foundation of how data is managed.
As regulations multiply and workloads shift, scalable governance is no longer a luxury. It is a requirement. Open, container-driven architectures give organizations the flexibility to meet evolving standards, secure their data, and adapt quickly.
timothyvalihora · 1 month ago
Best Practices to Protect Personal Data in 2024
In today’s digital landscape, protecting personally identifiable information (PII) demands attention. Individuals and businesses face a growing number of data breaches and cyberattacks, as well as strict data privacy laws. To keep PII secure, you must apply clear, effective cybersecurity strategies, including the following.
Start by focusing on data minimization. Collect only the PII essential to your operations, and avoid storing or asking for data you don’t need. For example, do not request an individual’s Social Security number unless it is genuinely necessary. Keeping less data on hand reduces the risk of exposure during a cyber incident.
Another consideration: a single data point is far less dangerous than a combination. If an attacker knows a date of birth but not the mother’s maiden name or the current address, the exposed PII poses a much smaller threat. Mr. Valihora can coach an organization on identifying PII, including how it is stored, and on developing a data masking strategy that ensures not enough pieces of the puzzle are available to a potential breach.
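As a rough sketch of that “pieces of the puzzle” idea, the snippet below replaces values with deterministic tokens so records stay joinable for analytics while no exploitable combination survives a breach. The fields, token format, and keep-last-four choice are illustrative assumptions; dedicated masking tools such as Optim provide format-preserving methods well beyond this:

```python
import hashlib

def mask_value(value: str, keep_last: int = 0) -> str:
    """Swap a value for a deterministic token, optionally keeping a short suffix."""
    token = hashlib.sha256(value.encode()).hexdigest()[:8]
    suffix = f"_{value[-keep_last:]}" if keep_last else ""
    return f"MASK_{token}{suffix}"

record = {"dob": "1974-03-12", "maiden_name": "Kowalski", "ssn": "123-45-6789"}

# Identical inputs always mask to identical tokens, so joins keep working,
# but the raw combination of DOB + maiden name + SSN is no longer present.
masked = {field: mask_value(val, keep_last=4 if field == "ssn" else 0)
          for field, val in record.items()}
print(masked)  # e.g. {'dob': 'MASK_…', 'maiden_name': 'MASK_…', 'ssn': 'MASK_…_6789'}
```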
Tim Valihora is an expert on:
Cloud PAK for Data (CP4D) v3.x, v4.x, v5.1
IBM InfoSphere Information Server (over 200 successful installs of IIS)
Information Governance Catalog
Information Governance Dashboard
FastTrack(tm)
Information Analyzer
SAP PACK for DS/QS
DS "Ready To Launch" (RTL)
DS SAP PACK for SAP Business Warehouse
IBM IIS "Rest API"
IBM IIS "DSODB"
IBM Business Process Manager (BPM)
MettleCI DataStage DevOps
Red Hat OpenShift Control Plane
Watson Knowledge Catalog
Enterprise Search
Data Quality
Data Masking PACK for DataStage + QualityStage
OPTIM Data Masking
CASS - Postal Address Certification
SERP - Postal Address Certification
QualityStage (QS) Matching strategies + Data Standardization / Cleansing
DataStage GRID Toolkit (GTK) installs
Mr. Valihora has performed more than 200 successful IBM IIS installs in his career and has worked with 120 satisfied IBM IIS clients.
Encrypt all sensitive PII, whether it is moving through systems or sitting in storage. Encryption blocks access for anyone who lacks the decryption key. Use strong encryption protocols such as AES-256 to keep PII private.
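A minimal sketch of AES-256 encryption in Python uses the third-party cryptography package’s AES-GCM construction. The sample plaintext is invented, and in production the key would live in a key management system rather than a variable:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message for a given key
ciphertext = aesgcm.encrypt(nonce, b"ssn=123-45-6789", None)

# Without the key, the ciphertext is opaque; with it, decryption is exact.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"ssn=123-45-6789"
```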
Apply firm access controls to limit who can interact with PII. Grant access only to those who need it. Use role-based access controls (RBAC) and multi-factor authentication (MFA) to ensure that only authorized personnel have access to or control over sensitive data. In addition, keep audit logs to track any access or changes, and hold individuals accountable.
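The core of RBAC plus an audit trail fits in a few lines. The roles, permissions, and users below are hypothetical, and MFA sits outside this sketch:

```python
# Roles map to permissions; users map to roles; every check is logged.
ROLE_PERMISSIONS = {
    "analyst": {"read:pii_masked"},
    "steward": {"read:pii", "write:pii"},
}
USER_ROLES = {"dana": "steward", "sam": "analyst"}
AUDIT_LOG: list[tuple[str, str, bool]] = []

def authorize(user: str, permission: str) -> bool:
    """Grant only what the user's role allows, and record every attempt."""
    allowed = permission in ROLE_PERMISSIONS.get(USER_ROLES.get(user, ""), set())
    AUDIT_LOG.append((user, permission, allowed))
    return allowed

print(authorize("sam", "read:pii"))   # False - analysts see masked data only
print(authorize("dana", "read:pii"))  # True
print(AUDIT_LOG)                      # the accountability trail
```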
Finally, carry out regular risk assessments and data audits. These reviews help you identify weak spots and confirm that your data practices align with current privacy regulations. By assessing risk, you can detect areas where PII may be at risk and apply proper safeguards.
Tim Valihora currently resides in Vero Beach, FL, and enjoys golf, darts, tennis, and playing guitar during work outages!
timothyvalihora · 2 months ago
Data Governance and Data Management - Why Both Matter
Distinct yet interdependent, data governance and data management must be understood and differentiated to craft effective data initiatives for an organization.
Data Governance Explained
Data governance involves creating and implementing policies, standards, and responsibilities that dictate how data is used and handled in an organization. It addresses critical questions such as: Who owns or is accountable for the data? What measures are in place to secure, protect and control data access?
For example, a governance framework may support regulatory compliance by setting up strict permissions, allowing only authorized individuals to access sensitive information.
Data Management Defined
Data management focuses on the operational side of data processes including storage, integration, and retrieval. It ensures that data remains accessible, accurate, organized, and ready for use.
Governance Versus Management
In essence, governance establishes the rules; management carries them out. Governance covers accountability, compliance, and oversight, while management addresses the processes and operational execution of data tasks. Finally, governance relies on frameworks and policies, whereas management uses technology such as data processing systems and analytics tools to manage and analyze data.
Governance and Management
Data governance and management are closely connected and mutually dependent. Without governance, data management can lose direction and accountability, leading to errors and inconsistent use of data. Likewise, without effective management, governance policies remain theoretical and go unenforced. Together, they ensure that data is secured, organized, and leveraged strategically while maintaining high levels of security and compliance.
timothyvalihora · 2 months ago
Modern Tools Enhance Data Governance and PII Management Compliance
Modern data governance focuses on effectively managing Personally Identifiable Information (PII). Tools like IBM Cloud Pak for Data (CP4D), Red Hat OpenShift, and Kubernetes provide organizations with comprehensive solutions to navigate complex regulatory requirements, including GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act). These platforms offer secure data handling, lineage tracking, and governance automation, helping businesses stay compliant while deriving value from their data.
PII management involves identifying, protecting, and ensuring the lawful use of sensitive data. Key requirements such as transparency, consent, and safeguards are essential to mitigate risks like breaches or misuse. IBM Cloud Pak for Data integrates governance, lineage tracking, and AI-driven insights into a unified framework, simplifying metadata management and ensuring compliance. It also enables self-service access to data catalogs, making it easier for authorized users to access and manage sensitive data securely.
Advanced IBM Cloud Pak for Data features, including automated policy enforcement and role-based access controls, ensure that PII remains protected while supporting analytics and machine learning applications. This approach simplifies compliance, minimizing the manual workload typically associated with regulatory adherence.
The growing adoption of multi-cloud environments has also driven platforms such as Informatica and Collibra to offer complementary governance tools that enhance PII protection. These solutions use AI-supported insights, automated data lineage, and centralized policy management to help organizations improve their data governance frameworks.
Mr. Valihora has extensive experience with IBM InfoSphere Information Server “MicroServices” products (which are built upon Red Hat Enterprise Linux technology, in conjunction with Docker/Kubernetes). Tim Valihora - President of TVMG Consulting Inc. - has extensive experience with respect to:
IBM InfoSphere Information Server “Traditional” (IIS v11.7.x)
IBM Cloud PAK for Data (CP4D)
IBM “DataStage Anywhere”
Mr. Valihora is a US-based (Vero Beach, FL) Data Governance specialist within the IBM InfoSphere Information Server (IIS) software suite and is also Cloud Certified on Collibra Data Governance Center.
Career Highlights Include:
Technical Architecture, IIS installations, post-install configuration, SDLC mentoring, ETL programming, performance tuning, and client-side training (including administrators, developers, or business analysts) on all of the over 15 out-of-the-box IBM IIS products
Over 180 successful IBM IIS installs - including the GRID Tool-Kit for DataStage (GTK), MPP, SMP, Multiple-Engines, Clustered Xmeta, Clustered WAS, Active-Passive Mirroring, and Oracle Real Application Clustered “IADB” or “Xmeta” configurations
Tim Valihora has been credited with performance-tuning the world’s fastest DataStage job, which clocked in at 1.27 billion rows of inserts/updates every 12 minutes (using the Dynamic Grid ToolKit (GTK) for DataStage (DS) with a configuration file that utilized 8 compute nodes, each with 12 CPU cores and 64 GB of RAM).
timothyvalihora · 3 months ago
A Brief Overview of Personally Identifiable Information
Given the widespread presence of Personally Identifiable Information (PII), organizations must protect it. To protect against threats and avoid attacks, regulator fines, and loss of customer trust, the organization must first identify all the PII it collects, processes, and uses. This enables proper security planning and the development of a robust privacy protection strategy.
PII includes any data that can identify an individual, such as name, date of birth, phone number, mailing address, email address, and Social Security number (SSN). It also covers IP addresses, login IDs, personally identifiable financial information (PIFI), and social media posts.
Significant privacy and security challenges are associated with sensitive PII (such as passport, driver’s license, or SSN) and non-sensitive PII (such as ethnicity, gender, or zip code).
Identifying all PII storage locations, including servers, cloud infrastructure, and employee laptops, is essential. Customer relationship management (CRM) platforms and Software-as-a-Service (SaaS) tools store, send and receive PII.
Classifying PII according to its sensitivity helps prioritize which data and systems require more protection. Different types of PII carry different levels of risk if exposed, and recognizing these differences is key to effective data security.
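One way to picture sensitivity-based classification is a tier map in which a record inherits the highest tier of any field it contains. The tiers and field assignments below are illustrative assumptions, not a regulatory standard:

```python
# Hypothetical sensitivity tiers: the higher the tier, the stronger the safeguards.
SENSITIVITY = {
    "ssn": 3, "passport_no": 3, "drivers_license": 3,  # sensitive PII
    "email": 2, "dob": 2, "phone": 2,                  # identifying on their own
    "zip_code": 1, "gender": 1, "ethnicity": 1,        # risky mainly in combination
}

def classify_record(record: dict) -> int:
    """A record inherits the highest sensitivity tier of any field it contains."""
    return max((SENSITIVITY.get(field, 0) for field in record), default=0)

print(classify_record({"email": "a@b.com", "zip_code": "32960"}))  # 2
print(classify_record({"ssn": "123-45-6789"}))                     # 3
```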
timothyvalihora · 7 months ago
A Brief Overview of Data Governance
Data governance manages an organization’s data to improve its quality, security, and availability. It ensures data integrity and security by defining and implementing policies, standards, and procedures for data collection, ownership, storage, processing, and deployment. The primary goal of data governance is to maintain secure, high-quality data that is easily accessible for data discovery and business intelligence (BI) purposes.
Important enablers of data governance initiatives include artificial intelligence (AI), big data, and digital transformation. The growing volume of data from sources like the Internet of Things (IoT) requires organizations to reconsider their data management practices to enhance BI capabilities.
Data governance is part of the broader discipline of data management, which involves collecting, processing, and using data securely and efficiently to support decision-making and improve business outcomes. Other aspects of data management include the data lifecycle, processing, storage, and security.
Organizations benefit from a data governance framework to organize and process their critical data assets. An effective framework should align with an organization’s specific data systems, sources, industry standards, and regulatory requirements, covering program goals, roles and responsibilities, data standards, policies, processes, auditing procedures, and governance tools.
timothyvalihora · 8 months ago
Importance of Data Lineage in an Organization
Data lineage involves understanding, documenting, and illustrating the journey of data as it moves from its origin to its point of use. It shows the data lifecycle from its starting point to its final use, along with every process the data passes through on the way.

Lineage is crucial for managing changes in the data environment, including software updates, data transfers, and alterations to database structures. Understanding data dependencies allows organizations to plan and sequence those changes. Lineage also aids data recovery.

Because experts can locate the origin of any data element, organizations can identify and organize their data resources more efficiently. Lineage likewise helps ensure that data is valid and accurate: users can trace its flow from origin to destination, both backward and forward, to identify and address any discrepancies.
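Conceptually, lineage is a directed graph, and tracing it forward (impact analysis) or backward (root-cause analysis) is a graph walk. The system and table names below are invented for illustration:

```python
from collections import defaultdict

# Each edge (a, b) means "b is derived from a".
edges = [
    ("crm.orders", "staging.orders"),
    ("staging.orders", "warehouse.fact_sales"),
    ("warehouse.fact_sales", "report.monthly_revenue"),
]

downstream, upstream = defaultdict(list), defaultdict(list)
for src, dst in edges:
    downstream[src].append(dst)
    upstream[dst].append(src)

def trace(node: str, graph: dict) -> list[str]:
    """Walk the graph from a node, collecting everything reachable."""
    seen, stack = [], [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.append(nxt)
                stack.append(nxt)
    return seen

print(trace("staging.orders", downstream))        # forward: what a change breaks
print(trace("report.monthly_revenue", upstream))  # backward: where a number came from
```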
timothyvalihora · 9 months ago
Why Your Organization Needs a Data Steward
Data stewardship ensures that your organization's data resources are always secure, properly used, reliable, and available. A data steward usually ensures that your organization's data aligns with its data governance policies.
Your organization needs a data steward because they organize and manage every area related to data accuracy and integrity. Integrity and accuracy matter most when sensitive customer data is involved, since a lapse can put that data in the hands of an unauthorized party.
Having a data steward in your organization also helps produce high-quality data. Stewards oversee data's entire journey - monitoring its source, where it is stored, and how it is used - and that end-to-end oversight leads to high-quality data.
Lastly, having data stewards in your organization can help you make better decisions. Since data stewards have a deep knowledge of the technical aspects involved in generating, storing, and handling data, they are well-equipped to assist various departments in maximizing the value of the available data.
timothyvalihora · 2 years ago
An Overview of the IBM InfoSphere Information Server
Carleton University alumnus Timothy Valihora is a resident of Vero Beach, Florida. Timothy Valihora serves as a consultant for the IBM InfoSphere Information Server (IIS) software stack, has worked for well over 80 clients worldwide, and has over 25 years of IT experience.
The IBM InfoSphere Information Server is a data integration platform that makes it easier to understand, cleanse, monitor, and transform data. It helps organizations and businesses understand information from a variety of sources. With InfoSphere Information Server, these organizations are able to drive innovation and lower risk.
The IBM InfoSphere Information Server suite comprises numerous components. These components perform different functions in information integration and form the building blocks necessary to deliver information across the organization. They include IBM InfoSphere Information Governance Catalog (IGC), IBM InfoSphere DataStage (DS) and QualityStage (QS), IBM InfoSphere Information Analyzer (IA), and IBM InfoSphere Services Director (ISD). In addition, the InfoSphere Information Server suite provides offerings to meet the business needs of organizations, including InfoSphere Information Server Enterprise Edition (PX) and InfoSphere Information Server for Data Quality & Data Governance.

The latest version, 11.7.1.4, includes changes to the Information Server Web Console and the Microservices tier (Watson Knowledge Catalog, Information Server Enterprise Search, and InfoSphere Information Analyzer). It also supports managing data rules and creating quality rules.
Career Highlights for Tim Valihora Include:
Technical Architecture, IIS installations, post-install-configuration, SDLC mentoring, ETL programming, performance-tuning, client-side training (including administrators, developers or business analysts) on all of the over 15 out-of-the-box IBM IIS (InfoSphere Information Server) products
Over 160 Successful IBM IIS installs - Including the GRID Tool-Kit for DataStage (GTK), MPP, SMP, Multiple-Engines, Clustered Xmeta, Clustered WAS, Active-Passive (Server) "Mirroring" and Oracle Real Application Clustered (RAC) “IADB” or “Xmeta” configurations
Extensive experience with creating realistic and achievable Disaster-Recovery (DR) for IBM IIS installations + Collibra Data Quality clusters
IBM MicroServices (MS) (built upon Red Hat OpenShift (RHOS) and Kubernetes Technology) installations and administration including Information Governance Catalog (IGC) “New”, Information Analyzer (IA) “thin”, Watson Knowledge Catalog (WKC) and Enterprise Search (ES) – on IBM Cloud PAK for Data (CP4D) platforms or IIS v11.7.1.4 “on-prem”
Over 8000 DataStage and QualityStage ETL Jobs Coded
Address Certification (WAVES, CASS, SERP, Address Doctor, Experian QAS)
Real-Time coding and mentoring via IBM IIS Information Services Director (ISD)
IIS IGC Rest-API coding (including custom audit coding for what has changed within IGC recently…or training on the IGC rest-explorer API)
IGC “Classic” and IGC “New” – Data Lineage via Extension Mapping Documents or XML “Flow-Docs”
IBM Business Process Manager (BPM) for Custom Workflows (including Data Quality rules + IGC Glossary Publishing etc.)
Information Analyzer (IA) Data Rules (via IA or QualityStage – in batch or real-time)
IBM IIS Stewardship Center installation and Configuration (BPM)
Data Quality Exception Console (DQEC) setup and configuration
IGC Glossary Publishing Remediation Workflows (BPM, Stewardship Center, Subscription Manager)
Tim Valihora has also logged over 2,500 hours of consulting on migrations from IBM IIS v11.7.x to IBM Cloud Pak for Data (CP4D) and specializes in upgrades across IIS versions and from IIS to CP4D.
In terms of hobbies, Tim Valihora - when not in the office - enjoys playing guitar (namely Jackson, Signature, Paul Reed Smith and Takamine), drums, squash, tennis, golf and riding his KTM 1290 Super Adventure "R", BMW 1250 GS Adventure and Ducati MultiStrada V4S motorcycles. Mr. Valihora is also an avid fisherman and enjoys spending time with his English Golden Retriever (Lilli).
timothyvalihora · 2 years ago
Data Quality Management Benefits of IBM InfoSphere Information Server
Timothy Valihora studied at Carleton University in Ottawa, Canada, and graduated with a bachelor’s degree in economics in 1994. Over the years, he has achieved multiple professional certifications ranging from Oracle Advanced SQL and SQL*Plus to Collibra Data Quality. With over 25 years of expertise, Timothy Valihora excels in IBM InfoSphere Information Server (IIS), offering comprehensive…
timothyvalihora · 2 years ago
The Importance of Protecting Personally Identifiable Information (PII)
Timothy Valihora is a resident of Vero Beach, Florida, an enterprise architect, and the president of TVMG Consulting Inc. in Ottawa. He has over 25 years of experience in information technology and is a seasoned consultant in architecture, health checks, installs, programming, mentoring, and performance tuning, among other areas. Timothy Valihora is also skilled in managing personally identifiable information (PII).
Personally Identifiable Information (PII) refers to any information or combination of data used to identify an individual. PII includes anything from sensitive information like biometric data and Social Security number to less direct information such as race, geographical location, and gender.
E-commerce businesses, social media algorithms, and Internet sites collect and analyze this information to better understand and interact with their consumers and to tailor information or products accordingly, which leads to more satisfied customers. However, exposure of PII poses significant risks, primarily identity theft and fraud, financial and reputational damage, and potential legal consequences.
Firms that fail to protect PII risk legal liability, regulatory fines, and consumer distrust. Compliance with data protection rules and regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), is critical for avoiding legal ramifications.
Safeguarding PII is critical to retaining trust and integrity in personal and professional interactions. Individuals and organizations must establish strong security measures such as encryption, firewalls, and access limits, as well as train staff in data privacy protocols. It is also crucial to keep software up to date and patch vulnerabilities as soon as possible.
Individuals and organizations can reduce the dangers of identity theft, financial loss, and legal issues by securing PII. Furthermore, these actions indicate a commitment to upholding private rights and adhering to legal requirements. Finally, protecting PII creates trust, boosts customer confidence, and creates a secure digital ecosystem for all parties involved.
timothyvalihora · 2 years ago
How to Safeguard Personally Identifiable Information (PII)
Timothy Valihora is a Florida-based executive who graduated from Carleton University in Ottawa, where he studied math, computer science, commerce, and film. A certified quality engineer, Timothy Valihora specializes in technical architecture, data quality, and data governance, including personally identifiable information (PII) management.
As the name suggests, personally identifiable information (PII) includes data such as names, phone numbers, and addresses that can identify or be used to contact a person, directly or indirectly. In the era of modern technology, this supply of information transforms how companies operate and interact with customers. However, as companies collect and store PII, they face security risks from infiltration by hackers. Thus, a number of measures are important in safeguarding such information.
One of the ways to do this is by encrypting all sensitive data, including information like Social Security, passport, and driver’s license numbers. Encryption involves transforming information into a coded format that can only be accessed with a specific digital key. Another method is multi-factor authentication, a security protocol that combines different forms of authentication required for a person to access software. By adding an extra layer of security, the system is strengthened against breaches.
Finally, it is considered a best practice to properly and diligently delete all old, unneeded PII, such as records of former customers or employees who have left the organization. Organizations are often tempted to hold on to as much data as possible, but retaining unnecessary sensitive data makes any data breach that much more difficult to correct.
timothyvalihora · 2 years ago
How Data Stewards Standardize Data Definitions across an Organization
An expert in IBM InfoSphere Information Server (IIS) technology, Timothy Valihora specializes in architecture, health checks, installs, programming, and mentoring. He has over 25 years of work experience in data quality, warehousing, conversions, and governance. As president of TVMG Consulting Inc., Timothy Valihora is an expert in “high availability” installs of IBM IIS and extensive use of the…
timothyvalihora · 2 years ago
Managing Personally Identifiable Information
Vero Beach, Florida resident Timothy Valihora graduated from Carleton University, where he studied math, computer science, commerce, and film. For over 25 years, he has provided tech consultation services through his company, TVMG Consulting Inc. Timothy Valihora is keenly interested in protecting personally identifiable information. Personally identifiable information (PII) is any data that…
timothyvalihora · 2 years ago
Data Governance Programs - How to Get Started
Timothy Valihora is a technical architect, an IBM Information Server (IIS) expert, and the president of TVMG Consulting, Inc. An alumnus of Carleton University, Timothy Valihora is experienced in IIS-related installations, upgrades, patches, and fix packs. He is also experienced in data governance, having worked on numerous data governance projects.

Data governance refers to the management of the availability, usability, integrity, and security of the data in an enterprise system. Ideally, a data governance program should begin with the executives of an establishment accepting and understanding their key roles. Given the long-term complexity of a data governance program, an organization should develop routines on a small scale with the full involvement of staff members. This strategy may also extend to an executive team appointing a sole lead administrator to foster prompt decisions.

Executives should proceed by formulating a data governance framework. This framework stems from the significance of data to an establishment. There is no defined number of administrative levels that an establishment should adopt in this regard; however, data owners and data specialists are indispensable.

A data governance program is incomplete without sufficient controls, thresholds, and indices. These are instructive in what data types an organization uses and processes. Given the inevitability of glitches, executives should also develop reporting tools to diagnose and resolve concerns as they arise.

Tim Valihora is an expert in ensuring that PII (Personally Identifiable Information) is utilized in a secure fashion. Data masking and data encryption are among the key technologies and approaches that Mr. Valihora has utilized while providing end-to-end data governance solutions to over 100 large-scale corporations in Canada, the USA, Asia-Pacific, and throughout Europe. Tim Valihora is a US-based (Vero Beach, FL) Data Governance specialist within the IBM InfoSphere Information Server (IIS) software suite and is also an expert with respect to Collibra Data Governance Center and Collibra Data Quality (formerly OWL Data Quality).
Career Highlights for Tim Valihora Include:
• Technical Architecture, IIS installations, post-install-configuration, SDLC mentoring, ETL programming, performance-tuning, client-side training (including administrators, developers or business analysts) on all of the over 15 out-of-the-box IBM IIS (InfoSphere Information Server) products
• Over 160 Successful IBM IIS installs - Including the GRID Tool-Kit for DataStage (GTK), MPP, SMP, Multiple-Engines, Clustered Xmeta, Clustered WAS, Active-Passive (Server) "Mirroring" and Oracle Real Application Clustered (RAC) “IADB” or “Xmeta” configurations
• Extensive experience with creating realistic and achievable Disaster-Recovery (DR) for IBM IIS installations + Collibra Data Quality clusters
• IBM MicroServices (MS) (built upon Red Hat OpenShift (RHOS) and Kubernetes Technology) installations and administration including Information Governance Catalog (IGC) “New”, Information Analyzer (IA) “thin”, Watson Knowledge Catalog (WKC) and Enterprise Search (ES) – on IBM Cloud PAK for Data (CP4D) platforms or IIS v11.7.1.4 “on-prem”
• Over 8000 DataStage and QualityStage ETL Jobs Coded
• Address Certification (WAVES, CASS, SERP, Address Doctor, Experian QAS)
• Real-Time coding and mentoring via IBM IIS Information Services Director (ISD)
• IIS IGC Rest-API coding (including custom audit coding for what has changed within IGC recently…or training on the IGC rest-explorer API)
• IGC “Classic” and IGC “New” – Data Lineage via Extension Mapping Documents or XML “Flow-Docs”
• IBM Business Process Manager (BPM) for Custom Workflows (including Data Quality rules + IGC Glossary Publishing etc.)
• Information Analyzer (IA) Data Rules (via IA or QualityStage – in batch or real-time)
• IBM IIS Stewardship Center installation and Configuration (BPM)
• Data Quality Exception Console (DQEC) setup and configuration
• IGC Glossary Publishing Remediation Workflows (BPM, Stewardship Center, Subscription Manager)
In terms of hobbies, Tim Valihora - when not in the office - enjoys playing guitar (namely Jackson, Signature and Takamine), drums, squash, tennis, golf and riding his KTM 1290 Super Adventure "R", BMW 1250 GS Adventure and Ducati MultiStrada V4S motorcycles.