
Month: April 2023

ITAD and E-Waste Recycling: What Are the Differences?




Electronic devices such as smartphones, laptops, televisions and household appliances are an integral part of our daily lives. However, the constant upgrading and discarding of these devices creates electronic waste, which can harm the environment and human health. Part of the answer is electronics recycling, which provides a safe way to process discarded devices. Electronics recycling and IT asset disposal (ITAD) are two terms that are often used interchangeably, but they are actually two separate processes. Although both involve the proper handling of retired electronic equipment, they have different goals and methods. In this article, we discuss the differences between electronics recycling and ITAD.

Electronics recycling

Electronics recycling is the process of collecting, disassembling and separating various components of electronic devices to recover valuable materials such as copper, aluminum and precious metals. Recycling helps reduce the amount of electronic waste that ends up in landfills, conserves natural resources and reduces the environmental impact of manufacturing new electronic devices.

IT Asset Disposal (ITAD)

IT asset disposal is a more comprehensive process that covers every aspect of managing decommissioned IT assets, including data sanitization, secure storage, remarketing and environmentally friendly disposal. The goal of ITAD is to maximize the value recovered from decommissioned IT assets while minimizing the risks related to data security, compliance and environmental impact.

Data Erasure

One of the key differences between electronics recycling and ITAD is the emphasis on data security. In ITAD, data sanitization is a critical part of the process. It involves securely removing or destroying data from decommissioned IT assets to ensure that sensitive information does not fall into the wrong hands. Data sanitization must be carried out in accordance with industry standards and regulations to ensure compliance.
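As an illustration of the basic idea behind overwrite-based erasure, here is a minimal Python sketch. It is not a compliant sanitization procedure: real ITAD workflows follow standards such as NIST SP 800-88, use certified tools that can prove completion, and treat SSDs differently, since wear-levelling can leave copies of data that simple overwrites never touch. The file name used is hypothetical.

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        """Overwrite a file with random data, then delete it (illustration only)."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))  # replace the contents with random bytes
                f.flush()
                os.fsync(f.fileno())       # force the write down to stable storage
        os.remove(path)

    # overwrite_and_delete("retired_asset_inventory.csv")  # hypothetical file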

Refurbishing & remarketing 

This process involves assessing the condition and value of decommissioned IT equipment, which may need to be repaired or wiped of data before being resold. ITAD providers can resell the equipment through online marketplaces or buy-back programs. Remarketing benefits businesses by recovering value, gives individuals and small businesses access to affordable equipment, and benefits the environment by reducing electronic waste and conserving resources.

Differences between electronics recycling and ITAD

Although electronics recycling and ITAD share the common goal of reducing electronic waste, there are some key differences between the two practices. Electronics recycling focuses on recovering valuable materials from devices, while ITAD covers the secure, end-to-end management of decommissioned IT equipment. ITAD service providers must adhere to strict data security standards and ensure that all data is securely removed before disposal or resale.

Benefits of electronics recycling and ITAD

Electronics recycling and ITAD offer a number of benefits, both environmental and economic. Electronics recycling reduces waste in landfills, conserves natural resources and creates employment opportunities in the recycling industry. ITAD gives companies a secure and cost-effective way to dispose of decommissioned IT equipment while ensuring that sensitive data is securely erased. ITAD also allows companies to recover some of the value of that equipment through resale or donation.

Summary

Electronics recycling and ITAD are two important practices that help reduce electronic waste and promote a more sustainable future. While electronics recycling focuses on recovering valuable materials from electronic devices, ITAD deals with the secure disposition of decommissioned IT equipment. By partnering with reputable electronics recycling and ITAD providers, such as SDR-IT and its strategic partner COMPAN-IT, individuals and companies can be confident that their electronic waste is properly managed and disposed of in a safe and environmentally friendly manner.



Servers and Data Centers




Data centers and servers are the backbone of today's digital world. They store, process and transmit huge amounts of data every day, enabling us to access information, communicate with others and conduct business online. In this article, we will outline the importance of data centers and servers, how they operate, and the challenges and trends shaping their future.

What is a data center?

A data center is a facility used to store computer systems and related components, such as telecommunications and storage systems. Data centers are designed to provide high levels of availability, security and reliability to ensure that stored and processed data is always available and protected.
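To make "high levels of availability" concrete, the short Python sketch below converts commonly quoted availability tiers into an annual downtime budget (the tiers shown are illustrative, not tied to any particular facility):

    MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity

    for availability in (99.0, 99.9, 99.99, 99.999):
        downtime = MINUTES_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% uptime allows ~{downtime:,.1f} minutes of downtime per year")

At 99.99% ("four nines"), the budget is roughly 53 minutes per year, which is why redundant power, cooling and network paths are defining features of data center design.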

They come in a variety of sizes, from small server rooms to large corporate facilities that can cover hundreds of square meters. Some data centers are owned and operated by individual organizations, while others are operated by third-party service providers and offer hosting services to multiple customers.

How do servers work?

Servers are the backbone of data centers, providing the computing power needed to process and store data. A server is a computer system that is designed to provide specific services or resources to other computers or devices connected to a network.

Servers can perform many functions, such as hosting websites, running applications, and storing and processing data. A server can be a physical machine or a virtual machine that runs on top of a physical one. Virtualization technology allows multiple virtual servers to run on a single physical machine, letting organizations maximize their computing resources and reduce costs.
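To see why this maximizes resources, consider a toy capacity check (all names and numbers below are made up for illustration). It assumes vCPUs can be overcommitted against physical cores, a common hypervisor practice, while memory is reserved one-to-one:

    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        vcpus: int
        mem_gb: int

    def fits(host_cores: int, host_mem_gb: int, vms: list[VM],
             cpu_overcommit: float = 4.0) -> bool:
        """Rough check that a set of VMs fits on one physical host."""
        total_vcpus = sum(vm.vcpus for vm in vms)
        total_mem = sum(vm.mem_gb for vm in vms)
        return (total_vcpus <= host_cores * cpu_overcommit
                and total_mem <= host_mem_gb)

    workloads = [VM("web", 4, 8), VM("db", 8, 32), VM("ci", 8, 16)]
    print(fits(host_cores=16, host_mem_gb=64, vms=workloads))  # True

Three workloads that would once have required three physical servers fit comfortably on a single host here, which is exactly where the cost savings come from.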

Challenges and trends

As the demand for digital services continues to grow, data centers and servers face several challenges and trends that will shape their future.

  • One of the primary challenges is the need for greater energy efficiency. Data centers consume huge amounts of energy, and as the number of data centers grows, so does their environmental impact. To meet this challenge, data centers are adopting more energy-efficient technologies, such as advanced cooling systems, and using renewable energy sources such as solar and wind power; a common way to track this efficiency is the PUE metric, illustrated in the sketch after this list.
  • Another challenge is the need for greater security. Data breaches can have serious consequences, both for organizations and individuals. Data centers are implementing more stringent security measures, such as multi-factor authentication and encryption, to protect against cyber attacks.
  • In terms of trends, edge computing is playing a growing role in data center and server architecture. It involves processing data closer to its source, reducing latency and improving performance. This is especially important for applications requiring real-time data processing, such as autonomous vehicles and industrial automation.
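As noted in the first bullet above, the standard measure of data center energy efficiency is Power Usage Effectiveness (PUE): total facility energy divided by the energy delivered to the IT equipment, with 1.0 as the theoretical ideal. A minimal sketch in Python, using made-up numbers for illustration:

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power Usage Effectiveness: total energy / IT energy (1.0 is ideal)."""
        return total_facility_kwh / it_equipment_kwh

    # Hypothetical monthly figures: 1.5 GWh total, of which 1.0 GWh reaches IT gear.
    print(pue(1_500_000, 1_000_000))  # 1.5 -> 0.5 W of overhead per watt of IT load

A facility that lowers its PUE from 1.5 toward 1.1, for example through more efficient cooling, cuts its non-IT energy overhead by a large fraction without touching the IT load.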

Summary

Data centers and servers are essential components of the digital infrastructure that supports our modern world. They enable us to access and store vast amounts of information, and provide the computing power needed for critical applications and services. As the demand for digital services continues to grow, data centers and servers will face ongoing challenges and trends that will shape their future. By adopting innovative technologies and strategies, data centers and servers can continue to evolve and meet the needs of our rapidly changing digital world.



Virtual Machine - What Is It and Why Is It So Useful?




Many of today's cutting-edge technologies, such as cloud computing, edge computing and microservices, owe their origins to the concept of the virtual machine - the separation of operating systems and software instances from the underlying physical computer.

What is a virtual machine?

A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. A host computer can run one or more guest VMs. Each VM has its own operating system and runs independently of the other VMs, even when they share the same physical host. VMs typically run on servers, but they can also run on desktop systems and even embedded platforms. Multiple virtual machines can share the resources of a physical host, including CPU cycles, network bandwidth and memory.

Virtual machines trace their origins to the dawn of computing in the 1960s, when time-sharing was used to separate mainframe users' software from the physical host system. A virtual machine was defined in the early 1970s as "an efficient, isolated duplicate of a real computer machine."

Virtual machines as we know them today gained popularity over the past 20 years as companies embraced server virtualization to use the processing power of their physical servers more efficiently, reducing the number of servers and saving space in the data center. Because applications with different system requirements could run on a single physical host, separate server hardware was no longer required for each application.

How do virtual machines work?

In general, there are two types of virtual machines: process virtual machines, which run a single process, and system virtual machines, which offer full separation of the operating system and applications from the physical computer. Examples of process virtual machines include the Java Virtual Machine, the .NET Framework and the Parrot virtual machine. System virtual machines rely on hypervisors as intermediaries that give software access to hardware resources. A hypervisor emulates the computer's CPU, memory, hard drive, network and other hardware resources, creating a pool of resources that can be allocated to individual virtual machines according to their specific requirements. The hypervisor can support multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux and Windows Server operating systems on the same physical host.
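For a concrete feel of how management software talks to a system-VM hypervisor, here is a minimal sketch using the libvirt Python bindings, a widely used API for KVM/QEMU and other hypervisors. It assumes a Linux host with the libvirt daemon running; the connection URI shown is the usual one for a local KVM/QEMU setup:

    import libvirt  # pip install libvirt-python; needs the libvirt daemon running

    # Open a read-only connection to the local KVM/QEMU hypervisor.
    conn = libvirt.openReadOnly("qemu:///system")

    for dom in conn.listAllDomains():
        mem_gib = dom.maxMemory() / (1024 ** 2)  # libvirt reports memory in KiB
        print(f"{dom.name()}: {mem_gib:.1f} GiB RAM, active={bool(dom.isActive())}")

    conn.close()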

Well-known hypervisor vendors include VMware (ESX/ESXi), Intel/Linux Foundation (Xen), Oracle (VM Server for SPARC and Oracle VM Server for x86) and Microsoft (Hyper-V). Desktop systems can also use virtual machines; an example would be a Mac user running a virtual instance of Windows on their physical Mac hardware.

What are the two types of hypervisors?

The hypervisor manages and allocates resources to virtual machines. In addition, it plans and adjusts how resources are distributed based on the configuration of the hypervisor and virtual machines, and can reallocate resources when demand changes. Most hypervisors fall into one of two categories:

  • Type 1 - A bare-metal hypervisor runs directly on a physical host and has direct access to its hardware. Type 1 hypervisors typically run on servers and are considered more efficient and better-performing than Type 2 hypervisors, making them well suited for server, desktop and application virtualization. Examples of Type 1 hypervisors include Microsoft Hyper-V and VMware ESXi.
  • Type 2 - Sometimes called a hosted hypervisor, a Type 2 hypervisor is installed on top of the host machine's operating system, which manages connections to hardware resources. Type 2 hypervisors are typically deployed on end-user systems for specific use cases. For example, a developer might use a Type 2 hypervisor to create a specific environment for building an application, or a data analyst might use it to test an application in an isolated environment. Examples include VMware Workstation and Oracle VirtualBox.

What are the advantages of virtual machines?

Because the software is separate from the physical host computer, users can run multiple operating system instances on a single piece of hardware, saving a company time, management costs and physical space. Another advantage is that VMs can support older applications, reducing or eliminating the need and cost of migrating an older application to an updated or different operating system. In addition, developers use VMs to test applications in a secure, sandboxed environment. Developers who want to see whether their applications will run on a new operating system can use VMs to test the software instead of buying the new hardware and operating system ahead of time. For example, Microsoft recently updated its free Windows virtual machines, which let developers download an evaluation virtual machine running Windows 11 to try out the operating system without having to upgrade their main computer. VMs can also help isolate malware that infects a particular VM instance: because software inside a virtual machine cannot tamper with the host computer, malicious software cannot do as much damage.

What are the disadvantages of virtual machines?

Virtual machines have several drawbacks. Running multiple VMs on a single physical host can result in unstable performance, especially if the infrastructure requirements for a given application are not met. This also makes them less efficient than a physical computer in many cases. In addition, if a physical server fails, all the applications running on it will stop working. Most IT shops therefore maintain a balance between physical and virtual systems.

What are other forms of virtualization?

The success of virtual machines in server virtualization has led to the application of virtualization in other areas, such as storage, networking and desktops. Chances are that if a type of hardware is used in the data center, the idea of virtualizing it is being explored (application delivery controllers, for example). In network virtualization, companies are exploring network-as-a-service options and network functions virtualization (NFV), which uses commodity servers to replace specialized network appliances and enable more flexible and scalable services. This differs somewhat from software-defined networking, which separates the network control plane from the data forwarding plane to enable more automated, policy-based provisioning and management of network resources. A third technology, virtual network functions, consists of software-based services that can run in an NFV environment, covering processes such as routing, firewalls, load balancing, WAN acceleration and encryption.

Verizon, for example, uses NFV to provide its virtual network services, which enable customers to launch new services and capabilities on demand. Services include virtual applications, routing, software-defined WANs, WAN optimization and even Session Border Controller as a Service (SBCaaS) for centrally managing and securely deploying IP-based real-time services such as VoIP and unified communications.

Virtual machines and containers

The development of virtual machines has led to the further development of technologies such as containers, which take the concept a step further and are gaining recognition among web application developers. In a container environment, a single application can be virtualized together with its dependencies. With much less overhead than a virtual machine, a container includes only binaries, libraries and the application itself. While some believe that the rise of containers may kill the virtual machine, VMs retain enough capabilities and benefits to keep the technology moving forward. For example, VMs remain useful when running multiple applications together or when running legacy applications on older operating systems. Also, some argue that containers are less secure than VM hypervisors, because containers share a single operating system among applications, while VMs can isolate both the application and the operating system. Gary Chen, research manager of IDC's Software-Defined Compute division, said the virtual machine software market remains a foundational technology, even as customers explore cloud and container architectures. "The virtual machine software market has been remarkably resilient and will continue to grow positively over the next five years, despite being highly mature and approaching saturation," Chen writes in IDC's Worldwide Virtual Machine Software Forecast, 2019-2022.

Virtual machines, 5G and edge computing

Virtual machines are seen as part of new technologies such as 5G and edge computing. For example, virtual desktop infrastructure (VDI) vendors such as Microsoft, VMware and Citrix are looking for ways to extend their VDI systems to employees who now work from home under a post-COVID hybrid model. "With VDI, you need extremely low latency, because you're sending your keystrokes and mouse movements to essentially a remote desktop," says Mahadev Satyanarayanan, a professor of computer science at Carnegie Mellon University. In 2009, Satyanarayanan wrote about how virtual machine-based cloudlets could be used to provide better processing capabilities to mobile devices at the edge of the Internet, which led to the development of edge computing. In the 5G wireless space, network slicing uses software-defined networking and NFV technologies to install network functionality on virtual machines on a virtualized server, providing services that used to run only on proprietary hardware. Like many of the technologies in use today, these emerging innovations would not have emerged without the original virtual machine concepts introduced decades ago.

Source: https://www.computerworld.pl




Supermicro Ultra SuperServer




Supermicro Ultra SuperServer® is Supermicro's 11th-generation high-performance, general-purpose server. The Ultra is designed to deliver the highest performance, flexibility, scalability and serviceability in demanding IT environments, and to power mission-critical enterprise workloads.

Unmatched performance: with support for two 2nd Generation Intel® Xeon® Scalable processors with up to 28 cores per socket, and up to 6TB of ECC DDR4 memory across 24 DIMM slots with Intel® Optane™ DCPMM support, the Ultra is built for demanding and complex workloads. The Ultra is also available in NVMe all-flash configurations, where users benefit from reduced latency and increased IOPS: NVMe can deliver up to 7x lower storage latency and up to 6x higher throughput, making the ROI benefits of NVMe deployments immediate and significant.

Exceptional flexibility: discover the freedom to adapt to different workloads with the versatile Supermicro Ultra system. Improve your server environment with the right combination of compute power, memory and storage performance, network flexibility and serviceability. This highly scalable system offers excellent expansion and storage options thanks to its patented vertical design, and with support for multiple PCIe add-on cards, the Ultra future-proofs your business against ever-changing compute and storage requirements. This server is designed to handle any workload in the most demanding environments.

Continuous reliability and serviceability: achieve higher levels of availability and data protection with the latest Intel® Xeon® Scalable processors, ECC DDR4 memory modules, NVMe-enabled drive bays and energy-efficient redundant power supplies. Designed from the ground up as an enterprise-class platform, the Ultra is fully equipped with energy-efficient components and built-in redundancy.

Supermicro Ultra servers are designed to deliver maximum performance, flexibility and scalability, making them a great choice for the most demanding Enterprise, Data Center and Cloud Computing environments.

