Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Running SQL Server on Raspberry Pi using Docker

Tags iot programming azure robotics app-development database

The article walks through the process of installing Docker on Raspberry Pi and running Microsoft SQL Server using Docker. It highlights the compatibility challenges between SQL Server and Raspberry Pi’s ARM-compatible CPU and explains how Docker resolves this issue. By Manjiri Gaikwad.

The article explains the prerequisites for running SQL Server on Raspberry Pi, including basic SQL knowledge and familiarity with SQL Server. It then provides an overview of Raspberry Pi, its features, and how it works. Raspberry Pi is a credit card-sized computer that can plug into a monitor or TV and is capable of performing various tasks. Docker allows users to deploy applications in containers, making it easier to manage and run software across different environments.

Further in the article:

  • It highlights the features of Raspberry Pi and how it can be used to build real-world applications
  • Docker, an open-source software framework for containerizing applications, is also introduced in the article

In conclusion, the article provides a comprehensive guide to running Microsoft SQL Server on a Raspberry Pi using Docker. It covers the prerequisites, introduces Raspberry Pi and Docker, and explains the steps to install and run SQL Server. Good read!
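
As a rough sketch of the approach (the article’s exact commands may differ), one widely used way to get a SQL Server engine onto the Pi’s ARM64 CPU is Microsoft’s Azure SQL Edge image; the container name and password below are illustrative:

# Run an ARM64-capable SQL Server engine (Azure SQL Edge) in a container
docker run -d --name sql \
  -e "ACCEPT_EULA=1" \
  -e "MSSQL_SA_PASSWORD=YourStr0ng!Pass" \
  -p 1433:1433 \
  mcr.microsoft.com/azure-sql-edge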

[Read More]

Distributed computing system models

Tags distributed programming learning app-development software-architecture

Distributed computing refers to a system where processing and data storage are distributed across multiple devices or systems, rather than being handled by a single central device. By @geeksforgeeks.org.

The article's main sections:

  • Physical model
  • Architectural model
    • Client-Server model
    • Peer-to-peer model
    • Layered model
    • Micro-services model
  • Fundamental models
    • Interaction model
    • Remote Procedure Call (RPC)
    • Failure model
    • Security model

Failure Model – This model addresses the faults and failures that occur in a distributed computing system. It provides a framework for identifying and rectifying faults that occur or may occur in the system. Fault-tolerance mechanisms such as replication, error detection, and recovery are implemented to handle failures. Excellent read for anybody interested in distributed systems!
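
As a toy illustration of the error detection and recovery idea (not from the article; all names here are hypothetical), a caller can wrap a remote operation in a bounded retry:

import scala.util.{Failure, Success, Try}

// Stand-in for a real RPC stub that may fail transiently
def remoteCall(): String = "pong"

// Retry a fallible operation up to maxAttempts times; a real system would
// add backoff and timeouts, or fail over to a replica
def withRetry[T](maxAttempts: Int)(op: => T): Try[T] =
  Try(op) match {
    case Success(v)                    => Success(v)
    case Failure(_) if maxAttempts > 1 => withRetry(maxAttempts - 1)(op)
    case failure                       => failure
  }

val result = withRetry(3)(remoteCall()) // Success(pong)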

[Read More]

Notes on teaching Test Driven Development

Tags tdd programming learning app-development software

Notes from an interesting exercise in which the author helped a client learn how to apply Test Driven Development and developer testing from scratch. The developer in question was very inquisitive and tried hard to understand how best to apply testing and even a little TDD. By @jeremydmiller.

A summary of the notes:

  • The purpose of an automated test suite is to help you know when it’s safe to ship code and provide an effective feedback loop that helps you modify code.
  • Test Driven Development (TDD) is primarily a low-level design technique and an important feedback loop for coding.
  • When applying TDD, consider how you’ll test your code upfront as an input to how the code is going to be written in the first place.
  • Approach any bigger development task by first trying to pick out the individual tasks or responsibilities within the larger user story.
  • Focus on isolating validation logic into its own function where you can easily test inputs and do simple assertions against the expected state (see the sketch below).
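
A minimal sketch of that advice (hypothetical names, assuming the munit test library on the classpath):

case class Signup(email: String, age: Int)

// Validation isolated into a pure function: easy inputs, simple assertions
def validate(s: Signup): Seq[String] =
  Seq(
    Option.when(!s.email.contains("@"))("email looks invalid"),
    Option.when(s.age < 18)("must be an adult")
  ).flatten

class SignupValidationSuite extends munit.FunSuite {
  test("flags a malformed email") {
    assertEquals(validate(Signup("not-an-email", 30)), Seq("email looks invalid"))
  }
  test("accepts a well-formed signup") {
    assertEquals(validate(Signup("ada@example.test", 30)), Seq.empty[String])
  }
}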

What the author absolutely did tell his client was to approach any bigger development task by first trying to pick out the individual tasks or responsibilities within the larger user story. In the end, you want to be quick enough with your testing and coding mechanics that your progress is only limited by how fast you can think. Nice one!

[Read More]

Streams in Scala - introductory guide

Tags akka scala programming learning streaming queues

Streams in Scala provide a lazy evaluation mechanism where elements are computed on-demand rather than being eagerly evaluated and stored in memory. This allows for efficient memory utilization, especially when dealing with large datasets or potentially infinite sequences of data. By Aniefiok Akpan.

There are many reasons for using a stream-processing approach when writing software. In this blog post I’m going to focus on just one of those reasons: Memory Management.
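
To make the laziness concrete, here is a tiny sketch (not from the article) using the LazyList type from Scala 2.13, which supersedes the now-deprecated Stream:

// An infinite sequence: no element is computed until something demands it
val naturals: LazyList[Int] = LazyList.from(1)

// Only the five elements that take(5) demands are ever materialized
val firstFiveSquares = naturals.map(n => n * n).take(5).toList // List(1, 4, 9, 16, 25)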

The article covers the following topics:

  • Why Streams
  • Scala Stream
  • Call-by-name (CBN)
  • A Simple use-case of Scala Stream
  • Alternative Libraries that implement Streams
For example:

// `files` is assumed to be a collection of java.io.File from the article's context
import scala.io.Source

// With LazyList, the content of the files is not loaded into memory eagerly
files.map { file =>
  Source.fromFile(file)
    .getLines()
    .to(LazyList)
    .filter(_.contains("Success"))
    .take(10)
}

The author highlights the advantages of processing elements one at a time and retaining only the necessary elements in memory. With streams, you can confidently tackle memory-intensive tasks, knowing that the memory footprint is optimized, leading to more stable and scalable applications. There is also a GitHub repo provided showcasing stream use. Good read!

[Read More]

What is the difference between tech debt and code debt?

Tags management cio learning agile

The article explains the difference between code debt and technical debt, two concepts that are often used in software development. Code debt refers to the potential cost that developers incur when they take shortcuts or implement quick fixes during the coding process, such as hard coding values, duplicate coding, or using deprecated libraries. By Sofia Jürgenson.
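
A hypothetical before-and-after for one such shortcut, the hard-coded value (illustrative only, not from the article):

// Code debt: the tax rate is a magic number, likely duplicated elsewhere
def priceWithTax(net: BigDecimal): BigDecimal = net * BigDecimal("1.20")

// Paying it down: name the value once so a rate change touches one line
val VatRate = BigDecimal("0.20")
def priceWithTaxNamed(net: BigDecimal): BigDecimal = net * (1 + VatRate)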

The article also discusses:

  • Understanding code debt
  • Decoding technical debt
  • How code debt and technical debt differ
  • Addressing code debt and technical debt:
    • Prioritize regular code refactoring
    • Adopt agile methodologies
    • Incorporate debt into the definition of done
    • Implement automated testing and continuous integration
    • Document everything
  • Code debt and technical debt management with no-code platforms

Good documentation is vital for managing technical debt. It forms a knowledge base that provides understanding about the system, making it easier to maintain and upgrade existing functionalities and technologies. Good read!

[Read More]

Measuring trends in Artificial Intelligence

Tags ai data-science cio learning big-data

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. By stanford.edu.

The latest edition includes data from a broad set of academic, private, and nonprofit organizations as well as more self-collected data and original analysis than any previous edition, including an expanded technical performance chapter, a new survey of robotics researchers around the world, data on global AI legislation records in 25 countries, and a new chapter with an in-depth analysis of technical AI ethics metrics. The 2022 AI Index Report is split into five chapters:

  • Research and development
  • Technical performance
  • Technical AI ethics
  • The economy and education
  • AI policy and governance

Despite rising geopolitical tensions, the United States and China had the greatest number of cross-country collaborations in AI publications from 2010 to 2021, increasing fivefold since 2010. The collaboration between the two countries produced 2.7 times more publications than that between the United Kingdom and China, the second highest on the list. Very interesting!

[Read More]

Artificial intelligence is a very real data center problem

Tags ai data-science cio database big-data

It would be foolish for us not to consider the consequences as tools like OpenAI’s ChatGPT or Google’s Bard proliferate and introduce machine intelligence to everyday people. That includes how our data centers are evolving amid the rapid growth in data that needs to be stored, processed, managed, and transferred. By Dr. Michael Lebby.

AI could be the Achilles heel of data centers unable to evolve in the face of the massive datasets required for AI. The article then focuses on:

  • From the Agora to hyper connected global markets: the rise of AI and modulators
  • Survival by the numbers: measuring the strain of AI
  • Avoiding data traffic jams
  • Gauging the impact of AI
  • Alleviating data center strain

If we look at the growth of computing power in high-computational processing systems over the past 60 years, we see that this growth initially doubled every 3-5 years. Then, from about 2020 onwards, growth has accelerated by over an order of magnitude, or 10X, to a doubling of computational power every 3-4 months (in terms of petaflops, a metric for computational processing magnitude); a doubling every 3-4 months compounds to roughly a tenfold increase in computational power each year.

While AI is expected to grow in maturity and accelerate in popularity, the impact on data centers is serious and will impart an incredible level of strain on future data center architecture. Five negative impacts are outlined in this article, with one alleviation being the implementation and design of very high-performance polymer optical modulators, which have already demonstrated a capability to modulate light faster, reduce power consumption, and fit in a tiny footprint the size of a grain of salt. Good read!

[Read More]

Discussing PostgreSQL: What changes in version 16

Tags mysql json cio database web-development

The article discusses the new features and improvements in PostgreSQL 16, the latest version of the open source relational database. By Amit Kapila.

The article covers the following topics:

  • The performance enhancements in query execution, bulk loading, and logical replication, which include more query parallelism, CPU acceleration, and load balancing.
  • The new SQL/JSON syntax, which allows users to query and manipulate JSON data using various operators and functions (see the sketch after this list).
  • The new access control rules for managing policies across large fleets of PostgreSQL instances, which enable users to define fine-grained permissions for different roles and contexts.
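
For a flavor of the SQL/JSON point above, a minimal sketch of two of the version 16 additions (the article’s own examples may differ):

-- Standard SQL/JSON constructors, new in PostgreSQL 16
SELECT json_object('name': 'postgres', 'version': 16);
SELECT json_array(1, 2, 'three');

-- The IS JSON predicate validates documents before processing them
SELECT js IS JSON AS is_valid
FROM (VALUES ('{"a": [1, 2]}'), ('not json')) AS t(js);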

The article also shares some insights on the future plans and directions for PostgreSQL development, such as supporting more languages and frameworks, enhancing security and reliability, and improving documentation and community engagement. Nice one!

[Read More]

Working with Postgres JSON query

Tags mysql json microservices database web-development

The article explains how to work with Postgres JSON Query, which is a feature that allows you to store and query JSON data in PostgreSQL. By Pratibha Sarin.

It covers the following topics:

  • What is JSON data and why store it in PostgreSQL
  • What are the differences between JSON and JSONB data types
  • What are the advantages of Postgres JSON Query
  • How to create, insert, query, and manipulate JSON data using various operators and functions (see the sketch after this list)
  • How to work with Postgres JSONB Query, which uses a binary representation of JSON that is faster to process and supports indexing
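
A minimal sketch of the store-and-query flow described above (table and data are hypothetical):

-- A JSONB column alongside ordinary relational columns
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);

INSERT INTO events (payload)
VALUES ('{"type": "login", "user": {"id": 7, "name": "ada"}}');

-- #>> extracts a nested field as text; @> tests containment
-- (and can be served by a GIN index on the payload column)
SELECT payload #>> '{user,name}' AS user_name
FROM events
WHERE payload @> '{"type": "login"}';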

You can have the best of both worlds by storing and querying JSON/JSONB data in your PostgreSQL tables. Postgres JSON Query offers you the flexibility of a NoSQL database combined with all the advantages of a relational database. Good read!

[Read More]

How to use MailHog to test emails locally (step-by-step guide)

Tags tdd programming microservices agile web-development

MailHog is an open source email testing tool that allows developers to test their email sending and receiving capabilities more efficiently. It is a lightweight and easy-to-use tool that can be run on multiple operating systems, including Windows, Linux, FreeBSD, and macOS. By Salman Ravoof.

MailHog works by setting up a fake SMTP server on the developer’s local machine. The developer can then configure their web application to use MailHog’s SMTP server to send and receive emails. This allows the developer to test their email functionality without having to send emails to a real server.
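
For example, a quick smoke test can point an ordinary SMTP client at MailHog’s default SMTP port 1025 and then check the web UI on port 8025 (a sketch assuming the javax.mail / Jakarta Mail library on the classpath):

import java.util.Properties
import javax.mail.{Message, Session, Transport}
import javax.mail.internet.{InternetAddress, MimeMessage}

// Point the SMTP client at MailHog instead of a real mail server
val props = new Properties()
props.put("mail.smtp.host", "localhost")
props.put("mail.smtp.port", "1025")

val session = Session.getInstance(props)
val msg = new MimeMessage(session)
msg.setFrom(new InternetAddress("app@example.test"))
msg.setRecipients(Message.RecipientType.TO, "user@example.test")
msg.setSubject("MailHog smoke test")
msg.setText("If this shows up at http://localhost:8025, SMTP is wired up correctly.")
Transport.send(msg)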

MailHog provides a number of features that make it a valuable tool for developers, including:

  • Easy to set up and use: MailHog can be installed and configured in just a few minutes. It does not require any external dependencies, and it can be run on any operating system that supports Go.
  • Web-based interface: MailHog provides a web-based interface that allows the developer to view all of the emails that have been sent and received by MailHog. The interface also allows the developer to search for emails and to view the contents of emails.
  • SMTP server emulation: MailHog emulates a real SMTP server, so the developer can test their email functionality without having to send emails to a real server. This can help to prevent problems with spam filters and blacklists.
  • SMTP server logging: MailHog logs all of the SMTP traffic that it handles. This can be helpful for troubleshooting problems with email delivery.

Overall, MailHog is a powerful and versatile tool that can help developers to test their email functionality more efficiently. Good read!

[Read More]