Welcome to our curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Introduction to distributed NoSQL databases

Tags cloud aws database nosql performance

Distributed NoSQL databases are designed to handle large datasets efficiently. They are distributed across multiple servers, which allows them to scale horizontally. NoSQL databases are more flexible than traditional relational databases, which can make them a better choice for applications with changing data needs. By Alex Patino.

How do NoSQL databases compare to traditional databases? NoSQL databases can handle unstructured data, while traditional relational databases are designed for structured data. NoSQL databases scale horizontally, spreading load by adding servers, while traditional databases typically scale vertically by adding capacity to a single server.
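
Horizontal scaling in these systems typically works by partitioning keys across nodes so that adding a server reshuffles as little data as possible. Here is a minimal consistent-hashing sketch in Python, an illustrative pattern only, not any particular database's actual partitioning scheme (the `ConsistentHashRing` name is made up for this example):

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps keys to servers; adding a server moves only ~1/N of the keys."""
    def __init__(self, servers, vnodes=100):
        # Each server gets `vnodes` points on the ring to even out the spread.
        self.ring = sorted((_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self.points = [h for h, _ in self.ring]

    def server_for(self, key: str) -> str:
        # The first ring point clockwise from the key's hash owns the key.
        idx = bisect_right(self.points, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.server_for("user:42"))  # deterministically one of the three nodes
```

When a fourth node joins, only the keys whose ring segment it takes over move to it; the rest stay put, which is what makes adding servers cheap.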

NoSQL databases can be used in a variety of applications, including:

  • Real-time analytics
  • IoT
  • Social media
  • Financial services
  • E-commerce

Aerospike is a high-performance, distributed NoSQL database that is designed to handle large amounts of real-time data with low latency. It is more scalable and cost-effective than other NoSQL databases. It supports both strong consistency and eventual consistency, which makes it a good choice for a variety of applications. Interesting read!
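
The strong-vs-eventual consistency tradeoff mentioned above is often explained with the classic replica quorum rule: a system is strongly consistent when every read quorum overlaps the latest write quorum. A sketch of that textbook R + W > N condition in Python (the general rule, not Aerospike's actual roster-based mechanism):

```python
def is_strongly_consistent(n_replicas: int, write_quorum: int, read_quorum: int) -> bool:
    """R + W > N guarantees every read quorum overlaps the latest write quorum."""
    return read_quorum + write_quorum > n_replicas

# With 3 replicas, writing to 2 and reading from 2 always overlaps:
print(is_strongly_consistent(3, 2, 2))  # True
# Writing to 1 and reading from 1 may miss the newest replica:
print(is_strongly_consistent(3, 1, 1))  # False
```

Lowering the quorums buys latency and availability at the cost of possibly stale reads, which is the essence of eventual consistency.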

[Read More]

Advanced SQL techniques to transform data analysis

Tags cloud big-data database mysql data-science

This article takes a proactive approach to data analysis using advanced SQL techniques, offering a step-by-step path to improving both the speed and the accuracy of your queries. By dasca.org.

You will learn:

  • Building and configuring your SQL environment for data analysis
  • Essential SQL techniques for data analysis
  • Advanced SQL techniques for complex analysis
  • Optimizing SQL queries for large datasets
  • Working with time series data in SQL
  • Data visualization with SQL and integration tools
  • SQL for predictive analytics & data modeling
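
As a small taste of the techniques listed above, window functions are one of the workhorses of analytical SQL. A self-contained example using Python's built-in sqlite3 module (illustrative table and data; window functions require SQLite 3.25+):

```python
import sqlite3

# In-memory database with a small, made-up sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('east', '2024-01', 100), ('east', '2024-02', 150),
        ('west', '2024-01', 200), ('west', '2024-02', 120);
""")

# Window function: running total per region, ordered by month.
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()

for row in rows:
    print(row)
# ('east', '2024-01', 100.0, 100.0)
# ('east', '2024-02', 150.0, 250.0)
# ('west', '2024-01', 200.0, 200.0)
# ('west', '2024-02', 120.0, 320.0)
```

The `PARTITION BY` clause restarts the running total for each region, something that would otherwise take a correlated subquery or a self-join.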

Learning SQL is essential for anyone who wants a better grasp of work in the field of data analysis. Its power for querying, interrogating, and combining data makes it the foundation of sound data-driven decision-making. As industries continue to require massive amounts of data for their processes, SQL’s importance will only continue to rise. Good read!

[Read More]

Write queries faster with Amazon Q generative SQL for Amazon Redshift

Tags cloud aws database mysql

Amazon Redshift is a fully managed, AI-powered cloud data warehouse that delivers strong price-performance for analytics workloads at any scale. Amazon Q generative SQL brings the capabilities of generative AI directly into the Amazon Redshift query editor. Amazon Q generative SQL for Amazon Redshift was launched in preview during AWS re:Invent 2023; after more than 85,000 queries were executed in preview, AWS announced general availability in September 2024. By Raghu Kuppala, Phil Bates, Xiao Qin, Erol Murtezaoglu, and Sushmita Barthakur.

Amazon Q generative SQL for Amazon Redshift uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process for users and reducing the time required to derive actionable data insights.

At a high level, the feature works as follows:

  • For generating the SQL code, you can write your query request in plain English within the conversational interface in the Redshift query editor
  • The query editor sends the query context to the underlying Amazon Q generative SQL platform, which uses generative AI to generate SQL code recommendations based on your Redshift metadata
  • You receive the generated SQL code suggestions within the same chat interface

Within this feature, user data is secure and private. Your data is not shared across accounts. Your queries, data and database schemas are not used to train a generative AI foundational model (FM). Your input is used as contextual prompts to the FM to answer only your queries. Follow the link to the full article to learn how it works!

[Read More]

MongooseIM 6.3: Prometheus, CockroachDB and more

Tags database app-development cio messaging performance erlang

MongooseIM is a scalable, efficient, high-performance instant messaging server using the proven, open, and extensible XMPP protocol. With each new version, we introduce new features and improvements. By Pawel Chrzaszcz.

MongooseIM 6.3 has been released. This version introduces a new instrumentation layer that supports the Prometheus monitoring system, allowing for easier configuration, extensibility, and the ability to add more handlers in the future. Additionally, MongooseIM 6.3 can now use CockroachDB, a scalable, distributed SQL database suited to globally distributed applications, to store all of its persistent data.

MongooseIM 6.3 also includes a number of other improvements and updates. These include:

  • A new CETS in-memory storage engine that is faster and more efficient than the previous version.
  • Improved support for IPv6.
  • A new module for generating self-signed certificates.
  • A number of bug fixes and performance improvements.

MongooseIM 6.3 is available for download from the MongooseIM website. The reworked instrumentation layer opens new possibilities for observability, with the Prometheus protocol supported out of the box and ease of future extensions built in. Nice one!

[Read More]

How AI surged Google Cloud's revenue growth

Tags cio ai gcp cloud performance

Google Cloud’s revenue growth, driven by AI capabilities, outpaces expectations and positions it alongside AWS and Microsoft Azure in the global cloud evolution. By Kitty Wheeler.

The momentum across the company is extraordinary. Our commitment to innovation, as well as our long-term focus and investment in AI, are paying off with consumers and partners benefiting from our AI tools.

Chief Executive Officer of Google, Sundar Pichai

It is no secret that technology companies are vying for dominance across the world, especially in a market that is increasingly critical to global business operations. The integration of AI capabilities into cloud services has further accelerated growth and innovation in the sector. In this context, the cloud computing market has been dominated by three main players: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

Key benefits of Google’s Dialogflow Enterprise Edition include:

  • Lets businesses build communication tools like chatbots for websites and messaging applications
  • Uses a machine learning foundation
  • Allows chatbots to better recognize context and intent during conversations with users
  • Provides more accurate and natural responses

Alphabet, like its competitors, is additionally investing heavily in AI and cloud infrastructure. The company has announced plans to spend billions on opening data centres worldwide to support its cloud and AI initiatives. Google has also integrated its generative AI chatbot, Gemini, into its cloud services, offering features such as AI-driven code generation, data processing and cybersecurity threat analysis. Good read!

[Read More]

Democratizing AI accelerators and GPU kernel programming using Triton

Tags ai cloud devops performance miscellaneous software

Triton is a language and compiler for parallel programming. Specifically, it is currently a Python-based DSL (Domain Specific Language), along with associated tooling, that enables writing efficient custom compute kernels used for implementing DNNs (Deep Neural Networks) and LLMs (Large Language Models), especially when executed on AI accelerators such as GPUs. By Sanjeev Rampal.

Triton is a DSL for writing GPU kernels that aren’t tied to any one vendor. Additionally, it is architected with multiple layers of compilers that automate the optimization and tuning needed to address the memory vs compute throughput tradeoff, without requiring the kernel developer to do so. This is the primary value of Triton. Using this combination of device-independent front-end compilation and device-dependent back-end compilation, it is able to generate near-optimal code for multiple hardware targets, ranging from existing accelerators to upcoming accelerator families from Intel, Qualcomm, Meta, Microsoft and more, without requiring the kernel developer to know the details of, or design optimization strategies for, each hardware architecture separately.
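
The tiling that Triton's compiler layers automate can be pictured in plain Python: each program instance processes one fixed-size block of the data, masking off the ragged final block. This is a scalar sketch of the pattern only, not actual Triton code:

```python
BLOCK_SIZE = 4  # analogous to a Triton tile size; real kernels tune this per device

def vector_add_blocked(x, y):
    """Add two vectors one block at a time, mimicking how each Triton
    program instance ('pid') handles one tile of the data."""
    n = len(x)
    out = [0.0] * n
    num_blocks = -(-n // BLOCK_SIZE)  # ceiling division, like triton.cdiv
    for pid in range(num_blocks):     # on a GPU, these iterations run in parallel
        start = pid * BLOCK_SIZE
        # min() plays the role of the bounds mask on the ragged final block
        for i in range(start, min(start + BLOCK_SIZE, n)):
            out[i] = x[i] + y[i]
    return out

print(vector_add_blocked(list(range(10)), [1] * 10))  # [1, 2, ..., 10]
```

In real Triton the inner loop disappears: each block is a vectorized load/compute/store, and the compiler decides how tiles map onto the target hardware.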

Triton is an important initiative in the move towards democratizing the use and programming of AI accelerators such as GPUs for Deep Neural Networks. In this article we shared some foundational concepts around this project. In future articles, we will dive into additional Triton details and illustrate its use in enterprise AI platforms from Red Hat. Nice one!

[Read More]

Laying the foundation for a career in platform engineering

Tags career cloud gcp teams google devops

Imagine that you’re an engineer at the company Acme Corp and you’ve been tasked with some big projects: integrating and delivering software using CI/CD and automation, as well as implementing data-driven metrics and observability tools. But many of your fellow engineers are struggling because there’s too much cognitive load — think deploying and automating Kubernetes clusters, configuring CI/CD pipelines, and worrying about security. By Darren Evans and Yuriy Babenko.

Platform engineering is a practice that helps companies deliver software and services more efficiently by providing a platform for developers to use. Platform engineers are responsible for building and maintaining this platform, as well as providing support to developers.

The document deep dives into:

  • Common attributes of a platform engineer
  • The design loop and the significance of customer focus
  • What does a platform engineer actually do?
  • What should platform engineers avoid?
  • Platform engineers are the backbone of modern software delivery

If platforms are first and foremost a product, as the CNCF Platforms White Paper suggests, the focus is on its users. From the Google DORA Research 2023 we know that user focus is key: “Teams that focus on the user have 40% higher organizational performance than teams that don’t.” For example, you might decide to adopt Google’s HEART (Happiness, Engagement, Adoption, Retention, Task Success) framework. Follow the link to the full article to get access to further reading, links and whitepapers. Excellent read!

[Read More]

Elixir in production: What is it used for?

Tags elixir web-development functional-programming how-to app-development erlang

There are many success stories out there about using Elixir in production that not only prove the language is mature enough to be a solid choice, but show it can be even more effective than the commonly used languages and frameworks, thanks to the features provided by BEAM and OTP. By RisingStack Engineering.

From startups to established enterprises, our examples clearly outline Elixir’s strengths:

  • Scalability – Effortlessly handles sudden surges in traffic and data.
  • Fault Tolerance – Maintains stability and uptime even during system failures.
  • Cost Efficiency – Reduces infrastructure needs.

The article then also mentions companies using Elixir in production and the reasons behind it:

  • Incredible developer productivity with Elixir at Remote.com
  • “It just works” – Elixir by accident at Multiverse
  • Less servers, same performance at Pinterest
  • How two Elixir nodes outperformed 20 Ruby nodes by 83x at Veeps.com
  • Elixir powers emerging markets – literally, at SparkMeter.io
  • Multiplayer with Elixir: 10,000 players in the same session
  • How PepsiCo uses Elixir
  • 4 Billion messages every day on Discord with Elixir

… and more. Follow the link to the article for details about the case studies mentioned above. As these case studies show, most companies needed not only scalability but also ease of maintenance and future-proofing; Elixir delivered all of these, proving it is more than ready for production use. Nice one!

[Read More]

How JavaScript signals are changing everyday development

Tags javascript web-development how-to app-development nodejs

In recent times, signals have gained attention as a powerful new tool for managing reactive state in JavaScript. But how did that come about? In this blog post, we’ll dive into what signals are, explore how this “new” approach to development really works, and compare it to previous state management solutions. By Hrvoje D.

Signals are basic data units that can automatically alert functions or computations when the data they hold changes. Signals therefore work in two directions: they can receive data, and they can transmit it down the line. Plenty of similar constructs are used across different frameworks, each with small differences in implementation, but all of them achieve the same thing in the end.
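
That receive-and-alert behavior fits in a few lines. The proposal itself is JavaScript; this Python sketch only illustrates the core idea of automatic dependency tracking, and its API (`Signal`, `effect`) is made up for illustration:

```python
# Minimal signal sketch: reads register dependents, writes re-run them.
_active = []  # stack of computations currently evaluating

class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        if _active:                          # record who is reading us
            self._subscribers.add(_active[-1])
        return self._value

    def set(self, value):
        self._value = value
        for fn in list(self._subscribers):   # alert dependents on change
            fn()

def effect(fn):
    def run():
        _active.append(run)                  # track reads made inside fn
        try:
            fn()
        finally:
            _active.pop()
    run()

count = Signal(1)
log = []
effect(lambda: log.append(count.get() * 2))  # runs once, subscribes to count
count.set(5)                                 # re-runs the effect automatically
print(log)  # [2, 10]
```

Note that the effect never declared its dependency on `count`; simply reading the signal inside the effect established the link, which is what the proposal's automatic dependency tracking refers to.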

The article mentions examples of similar signal usages in JavaScript:

  • React hooks comparison
  • Angular RxJS comparison
  • Simpler state management
  • Signals in the JavaScript ecosystem

The JavaScript Signals proposal is an initiative by TC39 to establish a standard for managing reactive states across JavaScript applications. While JavaScript previously introduced a standard for promises, this proposal differs by focusing on a foundational reactive model that frameworks can adopt, facilitating interoperability across libraries like React, Angular, and Vue. By focusing on features like automatic dependency tracking, lazy evaluation, and memoization, Signals aim to simplify functional reactive programming and enable efficient, glitch-free state updates. Good read!

[Read More]

How Kubernetes requests and limits really work

Tags devops agile cicd app-development kubernetes containers

Kubernetes is inarguably an elegant, refined, well-designed edifice of open source enterprise software. It is known. Even so, the internal machinations of this mighty platform tool are shrouded in mystery. Friendly abstractions, like “resource requests” for CPU and memory, hide from view a host of interrelated processes — precise and polished scheduling algorithms, clever transformations of friendly abstractions into arcane kernel features, a perhaps unsurprising amount of math — all conjoining to produce the working manifestations of a user’s expressed intent. By Reid Vandewiele.

By the time you reach the end of this article, you will learn:

  • Big picture view: Layers in the looking glass
    • Pod spec (kube-api)
    • Node status (kubelet)
    • Container configuration for CPU (container runtime)
    • Container configuration for memory (container runtime)
    • Node pressure and eviction (kubelet)

A node becomes “full” and unable to accept additional workloads based on resource requests alone. The actual CPU or memory used on the node doesn’t matter when deciding whether the node can handle more pods. If you want a “full” node to mean its actual CPU and memory resources are being used efficiently, you need to make sure CPU and memory requests match up with actual usage. Interesting read!
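
Two of the transformations the article describes can be sketched with simple arithmetic: a CPU request becomes a relative cgroup weight, a CPU limit becomes a hard CFS quota, and scheduling sums requests rather than actual usage. A simplified model using cgroup v1 numbers (not kubelet source code):

```python
CFS_PERIOD_US = 100_000  # default CFS scheduling period (100 ms)

def cpu_shares(request_millicores: int) -> int:
    """cgroup v1 weight: a request of 1000m (one CPU) becomes 1024 shares."""
    return request_millicores * 1024 // 1000

def cfs_quota_us(limit_millicores: int) -> int:
    """Hard cap: CPU time the container may consume per scheduling period."""
    return limit_millicores * CFS_PERIOD_US // 1000

def node_has_room(allocatable_millicores: int, pod_requests_millicores) -> bool:
    """The scheduler sums requests, never actual usage, to decide fit."""
    return sum(pod_requests_millicores) <= allocatable_millicores

print(cpu_shares(500))                          # 512
print(cfs_quota_us(1500))                       # 150000 us of CPU per 100000 us period
print(node_has_room(4000, [1500, 1500, 1500]))  # False: 4500m requested > 4000m allocatable
```

The last call shows why a node can be “full” while nearly idle: three pods requesting 1500m each exhaust a 4-CPU node's schedulable capacity even if none of them uses much CPU at all.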

[Read More]