Tutorial

NiFi Ingestion Blog Series. Part VI - I only have one rule and that is … - recommendations for using Apache NiFi

Apache NiFi, a big data processing engine with a graphical WebUI, was created to give non-programmers the ability to swiftly and codelessly create data pipelines, freeing them from those dirty, text-based methods of implementation. Unfortunately, we live in a world of trade-offs, and those features come at a price. The purpose of this blog series is to share our experience and the lessons learned from working with production NiFi pipelines; this is the sixth and final article.


In this post we sit back, think through all the details presented in the previous articles and extract some general rules and lessons learned that may be useful to other data engineers.

Harness the complexity 

Flow visualization is one of NiFi’s greatest features: a freshly created flow is usually self-explanatory. If we want to keep reaping this benefit, we must keep the structure of the flow reasonably clear. The rules are largely analogous to those used when writing code. It’s worth mentioning that what counts as a clear structure is subjective, so the suggestions here are rules of thumb rather than a rigid framework; nonetheless, here they are:

  • If your flow doesn’t fit on the visible canvas, or you have to zoom out so far that processor names become unreadable, it’s probably time to move some parts of the flow into process groups. 
  • If your flow gets too deep, with more than four levels of nesting, it may be time to restructure it or to move logic into scripts or custom processors.
  • If possible, avoid manipulating attributes that are widely used in the flow. An attribute, once set, is often read in many places, and changing it in one place can easily break logic several steps later. 
  • Keep as much as you can explicit. While writing scripts, it’s really tempting to just manipulate the attributes that are implicitly available, but if you later have to debug where a certain attribute changes, it’s usually better to have it specified in processor properties than to hunt for it in a script body (see the sketch after this list). In the case of custom processors, keep good documentation.
  • Sometimes your flow simply has too much logic inside. In this case, you might think about splitting it into a few flows and connecting them with an external service like Kafka. The decrease in size makes each flow more readable and keeps the parts reasonably separate, so you only have to debug a specific part. It is like eating spaghetti made of a single noodle: the shorter the noodle, the cleaner the flow.
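
To make the explicitness point concrete, here is a minimal, hypothetical ExecuteScript body (Jython engine) that sets an attribute the “implicit” way; the attribute name and value are made up for illustration:

```python
# ExecuteScript body (Jython engine). The attribute is set deep inside
# the script, so nothing on the canvas reveals that it changes here.
flowFile = session.get()
if flowFile is not None:
    # A hypothetical attribute; anyone debugging the flow must open
    # this script to discover where it is assigned.
    flowFile = session.putAttribute(flowFile, 'ingestion.target.table', 'events_raw')
    session.transfer(flowFile, REL_SUCCESS)
```

The same assignment expressed as an UpdateAttribute property (ingestion.target.table set to events_raw) is visible straight from the processor’s configuration dialog, which is exactly the kind of explicitness we recommend.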

Choose the right tool

To avoid pitfalls while developing, it’s important to remember that while NiFi is great for a variety of problems, for some it’s just… not. Developing some features in NiFi is simply not feasible when they are readily available in other tools, so it is a huge mistake (one that unfortunately happens more often than it should) to equate the size of your technological stack with the complexity of the solution. Integrating with other processing engines, databases, microservices and so on is a core assumption built into the design of NiFi. So even if most of the processing happens in NiFi, it’s always good to ask yourself whether it is the right tool for this particular job.

If you want to do stream processing with windowing or non-trivial logic, consider other technologies like Apache Flink. If you need batch processing on a Hadoop cluster, think of executing Hive queries from NiFi. If the processing cannot be defined with SQL, consider writing a separate Spark job for it (a sketch follows below). On the other hand, if you need to manage files on HDFS or generate and run Hive queries, then NiFi is a really good choice. 
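
As an illustration, here is a minimal sketch of what such a separate Spark job could look like; the file name, paths and the placeholder transformation are hypothetical, and NiFi could trigger the job with a processor such as ExecuteStreamCommand:

```python
# sessionize.py - a hypothetical PySpark job for logic that is awkward
# to express in SQL. NiFi hands over the input/output paths and picks
# the results up once the job finishes.
import sys

from pyspark.sql import SparkSession

def main(input_path, output_path):
    spark = SparkSession.builder.appName("non-sql-processing").getOrCreate()
    events = spark.read.parquet(input_path)
    # The actual non-SQL logic would live here; deduplication is just
    # a trivial placeholder to keep the sketch runnable.
    result = events.dropDuplicates()
    result.write.mode("overwrite").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

The point is not the code itself but the boundary: NiFi orchestrates and moves the data, while the awkward logic lives in a tool built for it.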

Choosing NiFi does not mean avoiding writing any code altogether, but the amount of code that has to be written can be significantly reduced.

Keep your finger on the pulse

Development in NiFi is based on out-of-the-box processors, which makes developers more dependent on the available building blocks than in classic software development. We can of course create our own solutions by implementing the functionality with a flow, a script or another custom approach, but this is usually problematic maintenance-wise. Consequently, it’s vital to stay up to date with the features added in new versions of NiFi. This happened to us when we needed a retry mechanism for communicating with third-party services like Hive, HDFS, etc. There was no available solution, so we implemented a retry process group that did what we needed (conceptually, something like the sketch below). The only issue was that the process group contained eleven processors and was placed in multiple spots in the flow, which resulted in around 250 extra processors. Fortunately, a couple of months later the RetryFlowFile processor was released, so we upgraded NiFi to a newer version and used it instead. 
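
Conceptually, the core of that custom retry group boiled down to something like the following hypothetical ExecuteScript body (the attribute name and limit are illustrative, not our actual implementation):

```python
# ExecuteScript body (Jython engine) - a conceptual sketch of retrying
# a flaky downstream call before RetryFlowFile existed. The 'success'
# relationship is wired back to the processor being retried; 'failure'
# gives up for good.
MAX_RETRIES = 3

flowFile = session.get()
if flowFile is not None:
    retries = int(flowFile.getAttribute('hive.retry.count') or '0')
    if retries < MAX_RETRIES:
        flowFile = session.putAttribute(flowFile, 'hive.retry.count', str(retries + 1))
        flowFile = session.penalize(flowFile)  # back off before the next attempt
        session.transfer(flowFile, REL_SUCCESS)
    else:
        session.transfer(flowFile, REL_FAILURE)
```

Multiply that, plus the routing, penalization and logging around it, by every place that talks to Hive or HDFS, and the 250-processor figure stops being surprising; a single RetryFlowFile processor replaces all of it.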

The lessons learned are clear:

  • If you are solving a generic problem, sooner or later someone else will solve it too. 
  • Keep up to date with the NiFi change log to see what is coming with the latest releases. 

CI and CD take time

In our experience, continuous integration and continuous deployment of NiFi projects are much more time-consuming than for other processing technologies. Depending on how sensitive the data is and how critical the process, there are a few ways of handling it:

  • Develop directly on a single main NiFi cluster. Disclaimer: developing on production is generally a really bad idea, but in some cases it’s possible. You have one registry, and versioning is easy.
  • Have a single NiFi Registry and multiple NiFi clusters. If you can afford to have production elements modifiable from the development environment, this can solve a lot of the problems with moving flows around.
  • If you can’t do either of the above… well, good luck, there is a lot of work ahead of you. NiFi has no out-of-the-box support for migrating whole flows between environments. It is doable, but automating the process is really hard (a rough sketch of one approach follows below).
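
One semi-automated approach is promoting versioned flows between two NiFi Registry instances over their REST APIs (the NiFi Toolkit CLI wraps similar calls). Below is a rough sketch only: the URLs and identifiers are placeholders, authentication and error handling are omitted, and it assumes the target flow already exists in the production bucket under the same flow identifier:

```python
# migrate_flow.py - a rough sketch of promoting a versioned flow from a
# development NiFi Registry to a production one via their REST APIs.
# All identifiers are placeholders; auth and error handling are omitted.
import requests

DEV_REGISTRY = "http://dev-registry:18080/nifi-registry-api"
PROD_REGISTRY = "http://prod-registry:18080/nifi-registry-api"
DEV_BUCKET, PROD_BUCKET = "dev-bucket-id", "prod-bucket-id"
FLOW_ID = "ingestion-flow-id"

# Fetch the latest snapshot of the flow from the development registry.
snapshot = requests.get(
    f"{DEV_REGISTRY}/buckets/{DEV_BUCKET}/flows/{FLOW_ID}/versions/latest"
).json()

# Retarget the snapshot metadata at the production bucket. The version
# number must be the next one expected by the target flow.
meta = snapshot["snapshotMetadata"]
meta["bucketIdentifier"] = PROD_BUCKET
prod_versions = requests.get(
    f"{PROD_REGISTRY}/buckets/{PROD_BUCKET}/flows/{FLOW_ID}/versions"
).json()
meta["version"] = len(prod_versions) + 1

requests.post(
    f"{PROD_REGISTRY}/buckets/{PROD_BUCKET}/flows/{FLOW_ID}/versions",
    json=snapshot,
)
```

Even then, this only moves the flow definition; variables and sensitive property values are not carried over by the registry and need their own promotion process, which is where most of the work hides.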


Conclusion

This is the sixth post in our series and the last. Under some of the previous posts we’ve seen comments saying that NiFi s**ks, which we don’t agree with. We are engineers who have spent quite some time with NiFi, so we write about the things we had issues with and how we solved them. The technology is not fully mature yet and is still evolving, but for many scenarios development in NiFi is lightning fast, and it is definitely, without any shadow of a doubt, a technology we recommend.

big data
technology
apache nifi
hadoop
hdfs
hive
kafka
2 December 2020
