Welcome to a curated list of handpicked free online resources related to IT, cloud, big data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Moving beyond Google: Why ChatGPT is the search engine of the future

Categories

Tags miscellaneous data-science big-data search learning

A decade ago, in my first year of teaching, I was thrilled when my school announced its new 1-to-1 technology program. The announcement meant that each of our students would now have a school-issued laptop in the classroom. Not only was it a welcome transition from traditional paper-based learning, but it also meant that I would be relieved of my daily tussles with the copy machine. By Zak Cohen

Unfortunately, my excitement was short-lived.

The article then covers:

  • How are schools keeping up with digital literacy?
  • The fallacy of digital natives
  • Digital literacy for a post-Google world: Beyond “Google it”
  • Do students really know what a credible source is?
  • “ChatGPT it” – A digital literacy strategy for a better learner experience
  • Getting started: Using ChatGPT as a digital literacy strategy

Google has undoubtedly transformed the landscape of education; however, its influence has been imposed on our classrooms without much consideration for alternative solutions. Now, with the advent of advanced AI technologies such as ChatGPT, we are presented with an opportunity to take control of the technology that is being used in our classrooms. Given the advantages of ChatGPT, is it still justifiable to use Google in the classroom? The future of education is not set in stone, and by taking control of the technology we use in our classrooms, we have the power to shape it for the better. Interesting read!

[Read More]

Organoid intelligence: Computing on the brain

Categories

Tags miscellaneous data-science big-data startups

Small spheres of neurons show promise for drug testing and computation. In parallel with recent developments in machine learning like GPT-4, a group of scientists has recently proposed the use of neural tissue itself, carefully grown to re-create the structures of the animal brain, as a computational substrate. By Michael Nolan.

Gathering developments from the fields of computer science, electrical engineering, neurobiology, electrophysiology, and pharmacology, the authors propose a new research initiative they call “organoid intelligence.”

The development of organoids has been made possible by two bioengineering breakthroughs: induced pluripotent stem cells (IPSCs) and 3D cell-culturing techniques.

Organoids typically measure 500 micrometers in diameter—roughly the thickness of your fingernail. As organoids develop, the researchers say, their constituent neurons begin to interconnect in networks and patterns of activity that mimic the structures of different brain regions. Super interesting read!

[Read More]

Simplified data pipelines with Pulsar transformation functions

Categories

Tags app-development data-science apache big-data

Pulsar transformation functions provide a low-code way to develop basic processing and routing of data using existing Pulsar features. Using functions in the cloud is a very efficient way of creating iterable workflows that can transform data, analyze source code, make platform configurations, and do many other useful jobs. As you develop a function, you will quickly realize the need for a solid foundation of utilities and formatting. By Christophe Bornet.

This piece discusses:

  • About transformation functions
  • Function operations
  • Example configuration
  • Transformation function compute operation
  • Taking transformation functions further
  • Deploying the functions on Astra Streaming
  • Getting started with transformation functions

Similar to designing microservices, functions have boilerplate code and need standardized processes, and writing that boilerplate can feel like valuable time spent on a seemingly mindless task. You bring value to a project by creating its core logic, not by writing JSON-parsing methods. You will find code examples and links to further reading in this article as well. Good read!
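
To picture the kind of boilerplate being replaced, here is a minimal hand-written Pulsar function in Scala against the Java Functions SDK interface (the class name and its trivial normalization logic are made up for illustration); the transformation functions described in the article let you express this sort of routine processing as configuration instead.

import org.apache.pulsar.functions.api.{Context, Function as PulsarFunction}

// Illustrative hand-written Pulsar function: consumes a String payload,
// logs it, and returns a normalized copy to the output topic.
class NormalizeFunction extends PulsarFunction[String, String] {
  override def process(input: String, context: Context): String = {
    context.getLogger.info(s"processing: $input")
    input.trim.toLowerCase
  }
}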

[Read More]

Cilium Mesh – One mesh to connect them all

Categories

Tags app-development devops kubernetes containers infosec

Cilium has rapidly become the standard in Kubernetes networking thanks to its advanced security, performance, and exceptional scalability. With the increase in the adoption of Cilium, more and more customers have asked to bring Cilium to the world of virtual machines and servers. By Thomas Graf.

Cilium Mesh connects Kubernetes workloads, virtual machines, and physical servers running in the cloud, on-premises, or at the edge. It is a natural evolution of Cilium. It builds on the strong Kubernetes networking foundation of identity-based security and deep observability, and combines it with Cilium Cluster Mesh, the highly scalable multi-cluster control plane.

Further in this announcement:

  • What is Cilium Mesh?
  • Why Cilium Mesh?
  • How do I configure Cilium Mesh?
  • Observability across infrastructure

Cilium provides extensive observability capabilities including the ability to stream observability data to Hubble UI, Prometheus & Grafana, and most SIEMs. This capability extends to Cilium Mesh to provide visibility into all workloads across cloud and on-prem infrastructure. Interesting read!

[Read More]

Clean code

Categories

Tags code-refactoring app-development programming performance

Clean code can be read and enhanced by a developer other than its original author. By Moises Gamio. These practices were introduced by Robert C. Martin. If you want to be a better programmer, you must follow these recommendations. The article then explains:

  • Clean code has intention-revealing names
  • Clean code tells a story
  • Functions should do one thing
  • Don’t comment bad code, rewrite it
  • Choose simplicity over complexity
  • Avoid hard coding
  • Name your variables according to the context
  • Clean Code separates levels of detail
  • Clean Code needs a few comments
  • Clean Code has small methods

Video and code examples for each principle are included, too. Nice one!
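
As a quick illustration (not taken from the article), here is how a few of these principles, such as intention-revealing names, single-purpose functions, and avoiding hard-coded values, might look in Scala; the invoice example is made up:

final case class Order(netAmount: BigDecimal, countryCode: String)

object InvoiceCalculator {
  // "Avoid hard coding": the rate lives in one named constant, not inline.
  private val DefaultVatRate = BigDecimal("0.19")

  // "Functions should do one thing" and have intention-revealing names:
  // each small method handles a single step of the calculation.
  def grossAmount(order: Order): BigDecimal =
    order.netAmount + vatFor(order)

  private def vatFor(order: Order): BigDecimal =
    order.netAmount * DefaultVatRate
}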

[Read More]

How we achieved a 6-fold increase in Podman startup speed

Categories

Tags ibm devops linux cloud performance servers

By cutting unnecessary processes, you can realize near-real-time container startup, critical in cars and other time-sensitive applications. By Dan Walsh (Red Hat), Alexander Larsson (Red Hat), Pierre-Yves Chibon (Red Hat).

One of the cornerstones of Red Hat In-Vehicle Operating System (RHIVOS) is using systemd to manage the life cycle of containers created by Podman. During Podman’s development, as with most container engines, the speed requirements were mainly around pulling container images. The article then describes how the authors achieved the performance improvements:

  • Satisfy the need for speed
  • Catch the details
  • Compile regular expressions with Go
  • Drop virtual networks
  • Use crun improvements
  • Precompile seccomp
  • Execute programs during initialization
  • Work around kernel issues
  • Use transient storage

The authors continue to work on finding and fixing performance issues in container startup in Podman. At this point, they have successfully improved startup time from around 2 seconds on the Raspberry Pi to under 0.3 seconds, a 6-fold increase in speed. Interesting read!
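
The fixes themselves live in Go inside Podman and its libraries, but one recurring theme in the list above, paying expensive initialization costs only when they are actually needed, translates to any language. A rough Scala sketch of that pattern, with a made-up ID validator as the example:

import scala.util.matching.Regex

object IdValidator {
  // An eager `val` would compile the pattern at initialization time;
  // `lazy val` defers that cost until the first call to isValid,
  // keeping startup paths that never validate IDs fast.
  private lazy val idPattern: Regex = "^[a-f0-9]{64}$".r

  def isValid(id: String): Boolean = idPattern.matches(id)
}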

[Read More]

Igniting IBM Ecosystem with New IBM z16 and IBM LinuxONE 4 single frame and rack mount options

Categories

Tags ibm big-data linux cloud cio

At IBM, we obsess over what our clients need—not just what will meet today’s business challenges, but what will help them capitalize on future opportunities. By Tina Tarquinio, VP, Product Management, IBM Z and LinuxONE.

The new offerings are built for the modern data center to help optimize flexibility and sustainability, particularly for small- and medium-sized businesses. For IBM Ecosystem partners, the new rack mount configurations open more business opportunities to reach new audiences with specific data center design requirements. Consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on comparable x86 servers under similar conditions and location can reduce energy consumption by 75% and space by 67%.

Today’s IT leaders are tasked with not only solving challenging technical problems, but also driving key business outcomes. Digitally transforming operations and reshaping customer experiences requires deep integration across the full stack of technology and business priorities.

IBM provides resources and skills for clients and partners who want to learn more about the new single frame and rack mount configurations and IBM LinuxONE Expert Care. To learn more about the new configurations from industry experts, please attend the upcoming virtual events on April 4 and April 17, 2023. Read the official press release to learn more. Interesting read!

[Read More]

The greater use of cloud computing for financial services

Categories

Tags miscellaneous fintech blockchain cloud cio

For good reason, the financial services industry is quickly adopting cloud computing technology. Cloud computing provides numerous advantages, including cost savings, scalability, agility, and increased security. By Finance Magnates Staff.

The article explains:

  • Cloud computing’s advantages in financial services
  • Cloud computing is becoming increasingly popular in financial services
  • The challenges of cloud computing in financial services
  • The dangers of over-relying on cloud computing

The financial services business is being transformed by cloud computing, which offers cost reductions, scalability, agility, and increased security. Financial services firms are increasingly embracing cloud computing technology, such as IaaS, PaaS, and SaaS, to store applications and data on the cloud, develop new apps and services, and access a variety of software applications. Links to further reading on fintech cybersecurity and digital banking trends are also provided in the blog. Nice one!

[Read More]

Comparing Avro vs Protobuf for data serialization

Categories

Tags json queues messaging app-development streaming apache

Data serialization is a crucial aspect of modern distributed systems because it enables the efficient communication and storage of structured data. In this article, we will discuss two popular serialization formats, Avro and Protocol Buffers (Protobuf for short), and compare their strengths and weaknesses to help you make an informed decision about which one to use in your projects. By Daniel Selans.

Avro is a serialization framework developed by the Apache Software Foundation. It is designed to be language-independent and schema-based, which means that data is serialized and deserialized using a schema that describes the structure of the data. Avro schemas are defined in JSON.
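
To make that concrete (this sketch is not from the article), the snippet below defines a record schema in JSON and writes a single record in Avro's binary encoding, using the official Apache Avro Java library from Scala; the User schema is invented for the example.

import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.EncoderFactory

@main def avroExample(): Unit = {
  // Avro schemas are plain JSON documents describing the record structure.
  val schema = new Schema.Parser().parse(
    """{
      |  "type": "record",
      |  "name": "User",
      |  "fields": [
      |    {"name": "name", "type": "string"},
      |    {"name": "age",  "type": "int"}
      |  ]
      |}""".stripMargin)

  // Build a generic record that conforms to the schema...
  val user: GenericRecord = new GenericData.Record(schema)
  user.put("name", "Ada")
  user.put("age", 36)

  // ...and serialize it with Avro's compact binary encoding.
  val out = new ByteArrayOutputStream()
  val encoder = EncoderFactory.get().binaryEncoder(out, null)
  new GenericDatumWriter[GenericRecord](schema).write(user, encoder)
  encoder.flush()
  println(s"serialized ${out.size()} bytes")
}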

The author then covers the following topics:

  • What is data serialization, and why do you need it?
  • What is Avro?
  • What is Protobuf?
  • Avro vs. Protobuf: Strengths and weaknesses

Regardless of the encodings used with streaming or event-driven data, being able to observe, monitor, and act on data is paramount for compliance, reliability, and shipping complex features faster. While Streamdal has first-class support for Protobuf, we work with any encoding to enable total observability and operability on any data stream or event-driven system. Good read!

[Read More]

Using Vulcan codecs with Kafka Java APIs

Categories

Tags apache java messaging app-development streaming scala

For those who aren’t familiar, Vulcan is a functional Avro encoding library that uses the official Apache Avro library under the hood. The difference from the official Avro build-plugin approach is that the types are defined in plain Scala, and the Avro schema is generated from them, instead of defining the Avro schema first and having code that adheres to it generated at compile time. By César Enrique.

import vulcan.Codec
import vulcan.generic.*
import java.time.Instant
import java.util.UUID

// A plain Scala case class; no Avro-generated classes are involved.
case class Data(id: UUID, timestamp: Instant, value: String)

object Data:
    // Derives the Avro schema and codec directly from the case class fields.
    given Codec[Data] = Codec.derive[Data]

The obvious benefit for Scala development is that dealing with Java types is no longer needed. Further in the article:

  • Vulcan example
  • Codec.encode and Codec.decode
  • Serializers and deserializers
  • Implementing a Serde
  • Caveats

To wrap up, we have explored the functionalities provided by Vulcan and implemented a way to integrate it with Kafka APIs that expect Serializer and Deserializer, since Vulcan does not support this out of the box.
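
To make the Serde part concrete, here is a skeletal sketch (not the article's implementation) of wiring codec-based conversion into Kafka's Serializer and Deserializer interfaces, reusing the Data case class from the snippet above; encodeToBytes and decodeFromBytes are hypothetical helpers standing in for the Vulcan-backed binary conversion the article builds on Codec.encode and Codec.decode.

import org.apache.kafka.common.serialization.{Deserializer, Serde, Serdes, Serializer}

// Hypothetical helpers: in the article these are backed by the Vulcan codec
// (Codec.encode / Codec.decode) plus the underlying Avro machinery.
def encodeToBytes(data: Data): Array[Byte] = ???
def decodeFromBytes(bytes: Array[Byte]): Data = ???

val dataSerializer: Serializer[Data] = new Serializer[Data] {
  override def serialize(topic: String, data: Data): Array[Byte] = encodeToBytes(data)
}

val dataDeserializer: Deserializer[Data] = new Deserializer[Data] {
  override def deserialize(topic: String, bytes: Array[Byte]): Data = decodeFromBytes(bytes)
}

// Serdes.serdeFrom pairs the two into a Serde usable with the Kafka APIs.
val dataSerde: Serde[Data] = Serdes.serdeFrom(dataSerializer, dataDeserializer)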

[Read More]