Tutorial
4 min read

Flink with a Metadata Catalog

Have you worked with Flink SQL or Flink Table API? Do you find it frustrating to manage sources and sinks across different projects or repositories, along with all the properties of the tables? How do you prevent duplication of table definitions? While platforms like Ververica offer metadata catalogs, what if you don't have access to them?

Here's a viable alternative: the reliable old Hive Metastore Service (HMS). I will walk you through the entire process and demonstrate how to seamlessly set up and work with Flink and HMS. If you're interested, read on!

Setting up HMS

The Hive Metastore is a standalone service designed to manage metadata, including table definitions, partitions, properties and statistics. While it's well-known in the Spark community, it can also be valuable for Flink developers.

By default, the Hive Metastore uses Apache Derby as its storage backend, but it can be configured to use any JDBC-compatible database. For this example, I've opted for PostgreSQL.

To install the Hive Metastore, follow these steps:

  • Install the Hadoop package.
  • Install the standalone Hive Metastore.
  • Install the JDBC driver for your chosen database.
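
A rough sketch of these three steps (the versions, URLs and directory names below are illustrative, not prescriptive):

# 1. Install the Hadoop package (the metastore needs its client libraries)
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
tar -xzf hadoop-3.3.6.tar.gz -C /opt
export HADOOP_HOME=/opt/hadoop-3.3.6

# 2. Install the standalone Hive Metastore
wget https://archive.apache.org/dist/hive/hive-standalone-metastore-3.0.0/hive-standalone-metastore-3.0.0-bin.tar.gz
tar -xzf hive-standalone-metastore-3.0.0-bin.tar.gz -C /opt
mv /opt/apache-hive-metastore-3.0.0-bin /opt/hive-metastore

# 3. Install the JDBC driver for the chosen database (PostgreSQL here)
wget https://jdbc.postgresql.org/download/postgresql-42.7.3.jar -P /opt/hive-metastore/lib/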

Configuration is provided via the hive-site.xml and metastore-site.xml files. Before the first start, prepare the database schema with the schematool utility:

/opt/hive-metastore/bin/schematool -dbType postgres

Add the -initSchema flag if this is the first time you're running it, or the -validate flag to check an existing database schema.
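
Once the schema is ready, start the service itself; in the standalone distribution this is (assuming the same /opt/hive-metastore install path as above):

/opt/hive-metastore/bin/start-metastore

By default it listens for Thrift connections on port 9083.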

The full example is available as a Docker image here. You can deploy it using a Helm chart, published here (remember to set up the PostgreSQL database first).

Flink dependencies

Flink and Hadoop always lead to dependency conflicts. This problem can be solved by shading (relocating) packages.

I found that the following additions to the Flink libraries worked.
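
A sketch of one working combination; the artifact and versions here are my assumptions (Flink 1.17.2, Hive 3.1.3), so adjust them to your setup:

# Pre-shaded Hive connector dropped into Flink's lib/ directory
wget https://repo1.maven.org/maven2/org/apache/flink/flink-sql-connector-hive-3.1.3_2.12/1.17.2/flink-sql-connector-hive-3.1.3_2.12-1.17.2.jar -P /opt/flink/lib/

# Hadoop classes provided via the environment rather than bundled jars
export HADOOP_CLASSPATH=$(hadoop classpath)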

The next step is the hive-site.xml configuration file. I created it in the /home/maciej/hive directory.

<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://hive-metastore.hms.svc.cluster.local:9083</value>
        <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
    </property>

    <property>
        <name>hive.metastore.schema.verification</name>
        <value>true</value>
    </property>
</configuration>

Flink Table API/SQL

To use HMS, you need to define your catalog in Flink (pointing to the directory that contains hive-site.xml) and switch to it:

CREATE CATALOG hms_example WITH (
    'type' = 'hive',
    -- 'default-database' = 'default',
    'hive-conf-dir' = '/home/maciej/hive'
);
USE CATALOG hms_example;

Now you have access to all of the table definitions (e.g. list them with SHOW TABLES). If you create a new table, it will be added to HMS and become available in other Flink jobs.

/** JOB 1 */
CREATE TABLE table_a (
    id INT,
    description STRING
) WITH (
    'connector' = 'datagen'
);

/** JOB 2 */
SHOW TABLES;

/** JOB 2 */
SELECT * FROM table_a LIMIT 5;

Security

The Hive server supports multiple security mechanisms to control access to data. It can be integrated with Kerberos or LDAP to provide Single Sign-On (SSO) authentication. Access to tables can be defined using Storage-Based Authorization (access to databases, tables and partitions based on the privileges defined in the storage), SQL Standards-Based Authorization (fine-grained access granted via SQL) or Apache Ranger.

SQL Standards-Based Authorization, Ranger and Hive's default authorization model are all enforced at query compilation time by the Hive server. However, this happens outside of HMS and therefore cannot be used with the Flink framework.

Only Storage-Based Authorization falls under HMS's responsibility and could be used in conjunction with Flink, although the extent of its usefulness is limited.

HMS is built on top of an RDBMS. It's possible to create users with different rights, such as one with read-write (RW) access and another with read-only (RO) access. This way, you can protect your metadata from accidental modifications.
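
A minimal sketch in PostgreSQL, assuming the metastore schema lives in a database named metastore under the default public schema (all names and passwords here are illustrative):

-- Read-only role for consumers that must not alter metadata
CREATE ROLE hms_ro LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE metastore TO hms_ro;
GRANT USAGE ON SCHEMA public TO hms_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO hms_ro;

-- Read-write role for the instance that may create or alter tables
CREATE ROLE hms_rw LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE metastore TO hms_rw;
GRANT USAGE ON SCHEMA public TO hms_rw;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO hms_rw;

You would then run two metastore instances, pointing the javax.jdo.option.ConnectionUserName (and password) of each one at the appropriate role.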

Please note that all table properties are visible to every job, and any credentials stored in them can easily be read: granting access to HMS grants access to all of the underlying passwords.

That’s all

I hope you find it easy to set up and use HMS with Flink. It's beneficial for managing table definitions across Flink jobs, projects or repositories, keeping the configuration separate from the SQL code. Need help with Flink? Sign up for a free consultation with one of our experts.

streaming
apache flink
flink
flink sql
13 August 2024
