Welcome to a curated list of handpicked free online resources related to IT, cloud, big data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Build a blog using Django, GraphQL, and Vue

Categories

Tags restful json web-development apis python nosql

This tutorial will take you through the process of building a Django blog back end and a Vue front end, using GraphQL to communicate between them. By Philipp Acsany.

For this project, you’ll create a small blogging application with some rudimentary features: authors can write many posts, and posts can have many tags and can be either published or unpublished.

Further in the article:

  • Prerequisites
  • Set up the Django Blog
  • Build a blog using Django, GraphQL, and Vue
  • Set up Graphene-Django
  • Set up django-cors-headers
  • Set up Vue.js
  • Create basic views and components
  • Update the Vue components
  • Implement Vue Apollo
  • Fetch the data

You’ve seen how you can use GraphQL to build typed, flexible views of your data. You can apply these same techniques to an existing Django application or one you plan to build. Like other APIs, yours can be consumed from almost any client-side framework. This is an extensive tutorial with all code attached and explained. Nice one!
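If you just want a feel for the GraphQL side before diving in, here is a minimal sketch of querying the Django/Graphene back end from a front end with plain fetch (the tutorial itself wires this up with Vue Apollo). The endpoint path and field names such as allPosts are assumptions based on the data model described above, not the tutorial’s exact schema:

```typescript
// Minimal sketch: query the Django/Graphene GraphQL endpoint with fetch.
// The URL and field names (allPosts, title, published, tags) are assumptions.
type Post = {
  title: string;
  published: boolean;
  tags: { name: string }[];
};

async function fetchPosts(): Promise<Post[]> {
  const response = await fetch("http://localhost:8000/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `
        query {
          allPosts {
            title
            published
            tags { name }
          }
        }`,
    }),
  });
  const { data } = await response.json();
  return data.allPosts as Post[];
}

fetchPosts().then((posts) => console.log(posts));
```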

[Read More]

Researchers harness 2D magnetic materials for energy-efficient computing

Categories

Tags ai fintech servers performance data-science

An MIT team precisely controlled an ultrathin magnet at room temperature, which could enable faster, more efficient processors and computer memories. By Adam Zewe.

Experimental computer memories and processors built from magnetic materials use far less energy than traditional silicon-based devices. Two-dimensional magnetic materials, composed of layers that are only a few atoms thick, have incredible properties that could allow magnetic-based devices to achieve unprecedented speed, efficiency, and scalability. The room-temperature result is key, since magnets composed of atomically thin van der Waals materials can typically only be controlled at extremely cold temperatures, which has made them difficult to deploy outside a laboratory.

The researchers used pulses of electrical current to switch the direction of the device’s magnetization at room temperature. Magnetic switching can be used in computation, the same way a transistor switches between open and closed to represent 0s and 1s in binary code, or in computer memory, where switching enables data storage.

In the future, such a magnet could be used to build faster computers that consume less electricity. It could also enable magnetic computer memories that are nonvolatile, which means they don’t leak information when powered off, or processors that make complex AI algorithms more energy-efficient. Interesting read!

[Read More]

Powering and cooling AI and accelerated computing in the data room

Categories

Tags ai servers cloud miscellaneous big-data

Artificial intelligence (AI) is here, and it is here to stay. “Every industry will become a technology industry,” according to NVIDIA founder and CEO, Jensen Huang. The use cases for AI are virtually limitless, from breakthroughs in medicine to high-accuracy fraud prevention. AI is already transforming our lives just as it is transforming every single industry. It is also beginning to fundamentally transform data center infrastructure. By Anton Chuchkov, Brad Wilson.

AI workloads are driving significant changes in how we power and cool the data processed as part of high-performance computing (HPC). A typical IT rack used to run workloads of 5 to 10 kilowatts (kW), and racks running loads higher than 20 kW were considered high density – a rare sight outside of very specific applications with narrow reach. Mark Zuckerberg announced that by the end of 2024, Meta will spend billions to deploy 350,000 H100 GPUs from NVIDIA. Rack densities of 40 kW per rack are now at the lower end of what is required to facilitate AI deployments, with densities surpassing 100 kW per rack becoming commonplace at large scale in the near future.

The transition to accelerated computing will not happen overnight. Data center and server room designers must look for ways to make power and cooling infrastructure future-ready, with considerations for the future growth of their workloads. Getting enough power to each rack requires upgrades from the grid to the rack. In the white space specifically, this likely means high-amperage busways and high-density rack PDUs. To reject the massive amount of heat generated by hardware running AI workloads, two liquid cooling technologies are emerging as the primary options: direct-to-chip liquid cooling and rear-door heat exchangers.

While direct-to-chip liquid cooling offers significantly higher density cooling capacity than air, it is important to note that there is still excess heat that the cold plates cannot capture. This heat will be rejected into the data room unless it is contained and removed through other means such as rear-door heat exchangers or room air cooling. Interesting read!

[Read More]

Nginx Proxy Manager: A complete guide

Categories

Tags nginx web-development cloud software-architecture devops

Nginx Proxy Manager (NPM) is a free, open-source application designed to simplify the management of Nginx reverse proxies, SSL, access lists, and more. It is built around a user-friendly dashboard aimed at users who aren’t exactly Nginx CLI experts. It also provides free SSL via Let’s Encrypt, Docker integration, and support for multiple users. By Diego Asturias.

In this complete guide, we aim to provide an introduction to Nginx Proxy Manager. We go through the basics of what NPM is, how it works, its features, and more. The author provides information about:

  • What is Nginx Proxy Manager?
  • NPM versus native NGINX configurations
  • How to install Nginx Proxy Manager?
  • How to use the Nginx Proxy Manager UI and configure it?
  • Vital configurations and initial setup
  • Nginx Proxy Manager FAQ

The tool allows users to easily expose web services within their network (or computer). In other words, it provides a secure and efficient gateway for internet traffic. Likewise, its integration with Let’s Encrypt enables users to secure their services with free SSL certificates. Nice one!

[Read More]

Apache web server hardening and security guide

Categories

Tags apache web-development cloud software-architecture infosec

The web server is a crucial part of any web-based application. Apache Web Server is often placed at the edge of the network, which makes it one of the most vulnerable services to attack. This is a practical guide to securing and hardening Apache HTTP Server. By Chandan Kumar.

The default configuration supplies a lot of sensitive information that may help a hacker prepare an attack on the applications. The majority of web application attacks come through XSS, information leakage, session management, and SQL injection, which are due to weak programming code and a failure to sanitize the web application infrastructure.

Practical advice in the article includes:

  • Remove server version banner
  • Disable directory browser listing
  • ETag
  • Run Apache from a non-privileged account
  • Protect binary and configuration directory permission
  • System settings protection
  • HTTP request methods
  • Disable HTTP TRACE requests
  • Set cookies with the HttpOnly and Secure flags
  • X-XSS-Protection
  • ModSecurity

… and more. The article is a great helper for middleware administrators, application support engineers, system analysts, or anyone working with, or eager to learn, hardening and security guidelines. Good read!

[Read More]

Most cloud-based genAI performance stinks

Categories

Tags ai cloud performance teams cio

Without basic computer architecture best practices, generative AI systems are sluggish. Here are a few tips to optimize complex systems. By David Linthicum.

Performance is often an afterthought in generative AI development and deployment. Most teams deploying generative AI systems on the cloud, and even off the cloud, have yet to learn what the performance of their generative AI systems should be, take no steps to determine it, and end up complaining about the performance after deployment. Or, more often, the users complain, and then the generative AI designers and developers complain to me.

Author also discusses in this blog:

  • Complex deployment landscapes
  • AI model tuning
  • Vendors could have done a better job establishing best practices
  • Security concerns
  • Regulatory compliance

Implement automation for scaling and resource optimization (autoscaling), which cloud providers offer. This includes using machine learning operations (MLOps) techniques and approaches for operating AI models.
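As a rough illustration of that advice, here is a hypothetical latency-driven scaling rule; the thresholds and the desired-replica calculation are placeholders for whatever your cloud provider’s autoscaler or MLOps platform actually exposes:

```typescript
// A hypothetical sketch of a latency-driven autoscaling decision for an
// inference service. The thresholds are placeholders; real deployments would
// normally rely on the cloud provider's managed autoscaling policies.
interface ScalingInput {
  currentReplicas: number;
  p95LatencyMs: number;    // observed 95th-percentile latency
  targetLatencyMs: number; // latency objective for the service
  minReplicas: number;
  maxReplicas: number;
}

function desiredReplicas(input: ScalingInput): number {
  const ratio = input.p95LatencyMs / input.targetLatencyMs;
  const raw = Math.ceil(input.currentReplicas * ratio);
  return Math.min(input.maxReplicas, Math.max(input.minReplicas, raw));
}

// Example: observed latency is twice the objective, so the replica count doubles.
console.log(desiredReplicas({
  currentReplicas: 4,
  p95LatencyMs: 900,
  targetLatencyMs: 450,
  minReplicas: 2,
  maxReplicas: 16,
})); // 8
```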

At their essence, generative AI systems are complex, distributed data-oriented systems that are challenging to build, deploy, and operate. They are all different, with different moving parts. Most of the parts are distributed everywhere, from the source databases for the training data, to the output data, to the core inference engines that often exist on cloud providers. Nice one!

[Read More]

Why and how to use site reliability golden signals

Categories

Tags devops app-development performance teams

Engineers use SRE metrics to benchmark and improve the reliability and performance of systems and services. Learn more about the 4 golden signals (latency, errors, traffic, saturation). By @cortex.io.

Software complexity makes it harder for teams to rapidly identify and resolve issues. IT service management has evolved from an afterthought to a central part of DevOps. Microservices architectures are especially prone to delayed or missed identification of such issues.

Further you will learn:

  • What is site reliability engineering (SRE)?
  • The core components of site reliability engineering
  • What are SRE metrics and why are they important?
  • What are the four golden signals of SRE?
    • Latency
    • Traffic
    • Errors
    • Saturation
  • Best practices for measuring and improving SRE metrics

Your priorities will change, and your metrics should evolve with them. One year, you might be more concerned with incident management and having your team resolve incidents rapidly; in that case, you may be interested in tracking mean time to recovery and latency. Interesting read!
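To make the four signals concrete, here is a minimal sketch that derives them from a window of request records; the record shape and the capacity figure used for saturation are illustrative assumptions, not something prescribed by the article:

```typescript
// Minimal sketch: compute latency, errors, traffic, and saturation from a
// window of request records. Record shape and capacityRps are assumptions;
// real systems derive these signals from metrics and tracing pipelines.
interface RequestRecord {
  timestampMs: number;
  durationMs: number;
  statusCode: number;
}

function goldenSignals(requests: RequestRecord[], windowSeconds: number, capacityRps: number) {
  const sorted = [...requests].sort((a, b) => a.durationMs - b.durationMs);
  const latencyP99Ms = sorted[Math.floor(sorted.length * 0.99)]?.durationMs ?? 0; // latency
  const errorCount = requests.filter((r) => r.statusCode >= 500).length;          // errors
  const trafficRps = requests.length / windowSeconds;                             // traffic
  const saturation = trafficRps / capacityRps;                                    // saturation (0..1+)
  return {
    latencyP99Ms,
    errorRate: requests.length ? errorCount / requests.length : 0,
    trafficRps,
    saturation,
  };
}
```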

[Read More]

Introducing DBOS Cloud: Transactional serverless computing on a cloud-native OS

Categories

Tags cloud database serverless cio

The idea for DBOS (DataBase-Oriented Operating System) originated 3 years ago with my realization that the state an operating system must maintain (files, processes, threads, messages, etc.) has increased in size by about 6 orders of magnitude since I began using Unix on a PDP-11/40 in 1973. By Mike Stonebraker.

Today, we’re releasing DBOS Cloud, a transactional serverless platform built on DBOS, targeting stateful TypeScript applications. DBOS Cloud is no ordinary serverless platform. Because it’s built on the DBOS operating system, it offers powerful and unique features, including reliable execution and time travel.

If code running in a DBOS program is ever interrupted, it automatically resumes from where it left off without repeating any of the work already performed. Programs always run to completion, and their operations execute once and only once. DBOS also lets you “rewind time” and restore the state of an application to what it was at any point in the past. In today’s release, we provide a time travel debugger, which lets you replay any DBOS Cloud trace locally on your laptop, exactly as it originally happened. You can step through past executions to reproduce rare bugs and even run new code against historical state. In the near future, we also plan to release time travel for disaster recovery, allowing you to roll back your application and its data to any past state.
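To get a feel for the reliable-execution guarantee, here is a rough conceptual sketch (deliberately not the DBOS SDK) of a workflow that checkpoints each step’s result in a durable store, so a restart resumes work instead of repeating it; the StepStore interface and workflow steps are hypothetical:

```typescript
// Conceptual sketch only -- NOT the DBOS SDK. It illustrates "run to
// completion, execute once and only once" by checkpointing each step's result
// in a durable store; a restarted workflow reuses results instead of redoing work.
// The StepStore interface is hypothetical; DBOS persists this state for you.
interface StepStore {
  get(workflowId: string, step: string): Promise<string | undefined>;
  put(workflowId: string, step: string, result: string): Promise<void>;
}

async function runStep(
  store: StepStore,
  workflowId: string,
  step: string,
  work: () => Promise<string>,
): Promise<string> {
  const cached = await store.get(workflowId, step);
  if (cached !== undefined) return cached; // step already ran before a crash: reuse its result
  const result = await work();
  await store.put(workflowId, step, result); // checkpoint before moving on
  return result;
}

async function orderWorkflow(store: StepStore, workflowId: string): Promise<string> {
  const payment = await runStep(store, workflowId, "charge", async () => "payment-123");
  return runStep(store, workflowId, "ship", async () => `shipped-for-${payment}`);
}
```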

DBOS Cloud is easy and free for you to try. Interesting read!

[Read More]

Build an API for your front end using Pages Functions

Categories

Tags cloud microservices serverless software-architecture apis devops

In this tutorial, you will build a full-stack Pages application. Your application will contain a front end, built using Cloudflare Pages and the React framework, and a JSON API, built with Pages Functions, that returns blog posts that can be retrieved and rendered in your front end. By @cloudflare.com.

This article will guide you through:

  • Introduction
  • Build your front end
    • Create a new React project
    • Set up your React project
  • Build your API
    • Write your Pages Function
  • Deploy
    • Deploy with Wrangler
    • Deploy via the dashboard
      • Create a new repository
      • Deploy with Cloudflare Pages

To deploy via the Cloudflare dashboard, you will need to create a new Git repository for your Pages project and connect your Git repository to Cloudflare. This tutorial uses GitHub as its Git provider. Clear instructions and code provided as well. Nice one!
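As a taste of the Pages Functions side, a file such as functions/api/posts.ts maps to the /api/posts route by convention; the sketch below returns hard-coded posts as JSON, and the post fields are placeholders rather than the tutorial’s exact data:

```typescript
// functions/api/posts.ts -- a minimal sketch of a Pages Function that serves
// blog posts as JSON at /api/posts. The post data is hard-coded for
// illustration; the tutorial's own fields may differ.
const posts = [
  { id: 1, title: "Hello, Pages Functions", body: "First post." },
  { id: 2, title: "JSON APIs on the edge", body: "Second post." },
];

// onRequestGet handles GET requests to this route.
export const onRequestGet = async (): Promise<Response> =>
  new Response(JSON.stringify(posts), {
    headers: { "Content-Type": "application/json" },
  });
```

Deploying with Wrangler or via the dashboard, as outlined above, publishes the static front end and this function together.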

[Read More]

How to build a custom API Gateway with Node.js

Categories

Tags devops microservices software-architecture apis learning

In the era of microservices, where applications are divided into smaller, independently deployable services, managing and securing the communication between these services becomes crucial. This is where an API gateway comes into play. By Iroro Chadere.

In the article:

  • What is an API Gateway?
  • Benefits of using an API Gateway
  • Security in API Gateways
  • How to build a custom API Gateway with Node.js

Building a custom API gateway with Node.js offers developers a flexible and customizable solution for managing, routing, and securing API calls in a microservices architecture. Throughout this tutorial, we’ve explored the fundamental concepts of API gateways, including their role in simplifying client-side code, improving scalability and performance, and enhancing security. Good read!
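For a sense of what the gateway core can look like, here is a minimal hedged sketch using Express and http-proxy-middleware; the service URLs, route prefixes, and API-key check are illustrative assumptions, and the article’s own implementation may differ:

```typescript
// A minimal API gateway sketch: requests under each route prefix are proxied
// to a downstream microservice. Service URLs and the API-key check are
// illustrative placeholders, not the article's implementation.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Simple gateway-level check before any request reaches a downstream service.
app.use((req, res, next) => {
  if (!req.headers["x-api-key"]) {
    res.status(401).json({ error: "Missing API key" });
    return;
  }
  next();
});

// Forward /users traffic to the user service and /orders traffic to the order service.
app.use("/users", createProxyMiddleware({ target: "http://localhost:4001", changeOrigin: true }));
app.use("/orders", createProxyMiddleware({ target: "http://localhost:4002", changeOrigin: true }));

app.listen(3000, () => console.log("API gateway listening on port 3000"));
```

A production gateway would add rate limiting, logging, and proper authentication, topics the article touches on under its security section.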

[Read More]