
Starlink – an overview of SpaceX's satellite internet services




In recent years SpaceX, the rocket company founded by billionaire entrepreneur Elon Musk, has hit the headlines with its ambitious plan to build a satellite Internet service called Starlink. It aims to provide high-speed, low-latency Internet access to users around the world, including in remote and rural areas where traditional Internet infrastructure is unavailable or unreliable. In this article, we provide an overview of Starlink and its potential impact on the Internet industry.

What is Starlink?

Starlink is a satellite Internet service developed by SpaceX to provide users around the world with access to high-speed, low-latency Internet. The service is based on a network of thousands of small satellites that are launched into low Earth orbit, which is much closer to Earth than the traditional geostationary satellites used for Internet communications. This proximity to Earth allows for higher data transfer speeds and lower latency.
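
To see why orbit altitude matters, here is a back-of-the-envelope calculation (a rough Python sketch; the altitudes are typical published figures, and real paths add routing and processing overhead on top of pure propagation delay):

# Rough best-case propagation delay for a satellite link, assuming
# straight vertical paths and signals travelling at the speed of light.
C = 299_792.458  # speed of light, km/s

LEO_ALTITUDE_KM = 550      # approximate Starlink shell altitude (assumption)
GEO_ALTITUDE_KM = 35_786   # geostationary orbit altitude

def round_trip_ms(altitude_km: float) -> float:
    """Best-case user -> satellite -> ground round trip, in milliseconds."""
    one_leg = altitude_km / C        # seconds for one up- or down-link leg
    return 4 * one_leg * 1000        # up + down, there and back, in ms

print(f"LEO round trip: ~{round_trip_ms(LEO_ALTITUDE_KM):.1f} ms")   # ~7 ms
print(f"GEO round trip: ~{round_trip_ms(GEO_ALTITUDE_KM):.1f} ms")   # ~477 ms

Even before any network overhead, a geostationary link carries hundreds of milliseconds of unavoidable physics, which is why LEO constellations can offer latency comparable to wired connections.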

How does it work?

The Starlink satellite network is designed to provide Internet access to users on the ground through a network of user terminals, which are essentially small, flat devices the size of a pizza box that can be mounted on a rooftop or other location with a good view of the sky. The user terminals communicate with Starlink satellites to provide users with high-speed, low-latency Internet access. The Starlink network is designed to be highly scalable, with plans to deploy thousands of additional satellites in the coming years. This will enable Starlink to provide Internet access to more users, especially in remote and rural areas where traditional Internet infrastructure is unavailable or unreliable.

What can we achieve with Starlink?

The potential benefits of Starlink are numerous, especially for users in remote and rural areas. With this technology, users can gain access to high-speed, low-latency Internet that is comparable to or better than traditional wired Internet services. This can help bridge the digital divide, allowing more people to participate in the digital economy and access educational and healthcare resources. Starlink can also provide backup Internet services in places where traditional Internet infrastructure is prone to failure or disruption, such as during natural disasters or other emergencies. This can help improve communication and coordination during a crisis, potentially saving lives and reducing damage.

What are the challenges and limitations of Starlink?

While Starlink has the potential to be a world-changing technology for the Internet industry, it also faces several challenges and limitations. One of the biggest challenges is the cost of deploying and maintaining a satellite network, which is still quite high compared to traditional wired Internet infrastructure. Starlink is still in the early stages of deployment and it remains to be seen how well it will perform in real-world conditions, especially in areas with inclement weather or other environmental conditions that could affect signal quality. There are also concerns about the impact of the Starlink satellite network on astronomical research, as bright reflections from satellites could interfere with telescope observations.

Summary

Starlink is a satellite Internet service developed by SpaceX that aims to provide high-speed, low-latency Internet access to users around the world, including those in remote and rural areas. Although the technology is still in the early stages of deployment and faces several challenges and limitations, it has the potential to reshape the Internet industry. As more satellites are launched and the network expands, we can expect to see ever more innovative uses of this technology across a wide range of industries.



Supermicro Ultra SuperServer




Supermicro Ultra SuperServer® is Supermicro's 11th-generation, high-performance general-purpose server. Ultra is designed to deliver superior performance, flexibility, scalability and serviceability in demanding IT environments, and to power critical enterprise workloads.

Unmatched performance: support for two second-generation Intel® Xeon® Scalable processors with up to 28 cores per socket, and up to 6 TB of ECC DDR4 memory across 24 DIMM slots with support for Intel® Optane™ technology, means Ultra is designed to handle demanding and complex workloads. Ultra is available in all-flash NVMe configurations, where users benefit from reduced latency and increased IOPS. With NVMe, storage latency can be reduced by up to 7 times and throughput increased by up to 6 times. The ROI benefits of NVMe deployments are immediate and significant.
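
As a rough illustration of how such storage figures are measured, here is a minimal Python sketch that times random 4 KiB reads against a test file. The path is a placeholder, and a real benchmark such as fio controls caching and queue depth far more carefully:

# Minimal random-read latency/IOPS sketch. Without O_DIRECT the OS page
# cache will serve repeated reads, so treat the numbers as illustrative.
import os, random, time

PATH = "/tmp/testfile"   # hypothetical test file, create it beforehand
BLOCK = 4096             # 4 KiB reads
READS = 10_000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(0, size - BLOCK)
    os.pread(fd, BLOCK, offset)          # read BLOCK bytes at a random offset
elapsed = time.perf_counter() - start
os.close(fd)

print(f"avg latency: {elapsed / READS * 1e6:.1f} us")
print(f"IOPS: {READS / elapsed:,.0f}")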

Exceptional flexibility: discover the freedom to adapt to different workloads with the versatile Supermicro Ultra system. Enhance your server environment with the right combination of compute power, memory and storage performance, network flexibility and serviceability. This highly scalable system provides excellent expansion and storage options through its patented vertical expansion design. With support for multiple additional PCIe cards, Ultra future-proofs your business against ever-changing compute and storage demands. The Ultra server is designed to handle any workload across a range of demanding environments.

Continued reliability and ease of service: achieve higher levels of availability with the latest Intel® Xeon® Scalable processors, ECC DDR4 memory modules, hot-swappable drive bays with NVMe support and energy-efficient redundant power supplies. Designed from the ground up as enterprise grade, Ultra is fully equipped with energy-efficient components and built-in redundancy.



Artificial intelligence in cloud computing




Artificial intelligence has become one of the hottest topics in the IT world in recent years. From startups to the largest global companies, everyone is looking for new ways to use it: to build applications and systems that work better than their previous versions, or to make possible things that were previously out of reach or too time-consuming.
The concept of artificial intelligence itself, and the mechanisms we use to build intelligent systems, are mostly nothing new and have been around for decades. Yet it is only now that we are seeing how profoundly AI can change our world.

There are several reasons for this. First, in the age of the Internet we have access to an almost unlimited pool of training data that we can use to make our models better and more accurate. And even if we do not want to use data from the Internet, the data we continuously collect on our own servers may prove fully sufficient. The second reason is access to sufficient computing power: training AI models is not a trivial task and requires serious server infrastructure. In the era of cloud computing, where we can spin up 1 or 10,000 servers in a few minutes, computing power is no longer a problem. The third reason for the breakthrough, partly a consequence of the previous two, is the development and refinement of AI algorithms and tools (e.g. TensorFlow, Apache MXNet, Gluon and many more).
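
As a taste of how low the entry barrier has become, here is a minimal TensorFlow sketch that trains a small classifier on synthetic data; the same script runs unchanged on a laptop or on a fleet of cloud servers (the model and data here are placeholders, not a realistic workload):

# Train a tiny dense classifier on synthetic data with TensorFlow/Keras.
import numpy as np
import tensorflow as tf

# Synthetic "training data": 1,000 samples, 20 features, 2 classes.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("int32")   # label: sum of features above 10?

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32)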

So building applications and systems with AI has become easier, but it is still not trivial. Can it be simplified even further? Yes, thanks to cloud computing. It turns out that alongside the well-known services, such as virtual servers, databases or archiving services, there are also ready-made AI services.

Every IT project that uses AI is, of course, different, but it may well turn out that we can pick services from the cloud computing portfolio that let us deliver it faster. Companies that lack in-house AI competencies, but still want to benefit from the technology, can use ready-made AI services built to solve common problems.
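
For illustration, here is a sketch of calling one such ready-made service. AWS Rekognition via boto3 is used purely as one example among many cloud AI offerings; the region, bucket and file names are placeholders:

# Call a managed image-recognition service instead of training a model.
import boto3

client = boto3.client("rekognition", region_name="eu-west-1")

response = client.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

# Print the detected labels and the service's confidence in each.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')

A few lines like these replace the whole pipeline of collecting data, training and hosting a model, which is exactly the appeal for companies without AI teams.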



Programmable computer networks. Evolution of data transfer.




Computer hardware increasingly ships from the factory with high-speed 2.5G, 5G and even 10G network cards and wireless WiFi 5, 6 and 6E modules. To take advantage of these communication capabilities, a company needs to prepare the appropriate infrastructure, and managing an extensive network efficiently can be troublesome. This is where SDN can help.

SDN (Software Defined Networking), also known as software-controlled or programmable computer networking, is a concept of LAN/WLAN management that separates the physical network infrastructure responsible for data transmission from the software layer that controls its operation. This makes it possible to manage the network centrally, without regard to its physical structure. In other words, the administrator can control many elements of the infrastructure, such as routers, access points, switches and firewalls, as if they were a single device. Data transmission is controlled at the level of the global corporate network rather than per device. The OpenFlow protocol is most often, but not always, used to control such a network.

The concept of SDN was created to solve the problems of configuring large numbers of network devices. The static architecture of traditional networks is decentralized and complex, while today, in the era of virtualization and cloud systems, a company network is expected to be far more flexible and to solve emerging problems easily. SDN centralizes "network intelligence" in one element, separating the forwarding of network packets (the data plane) from the routing process (the control plane). The control plane consists of one or more controllers; this can be considered the brain of the SDN network, where all the intelligence is concentrated.
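
The control-plane/data-plane split can be sketched in a few lines of Python. This is a toy model of the concept only, not a real OpenFlow controller; all names are illustrative:

# Toy SDN model: one controller (control plane) computes forwarding rules
# and pushes them into the flow tables of many switches (data plane).
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination prefix -> output port

    def install_flow(self, dst, port):
        self.flow_table[dst] = port          # rule pushed by the controller

    def forward(self, dst):
        # On a table miss, a real switch would ask the controller what to do.
        port = self.flow_table.get(dst, "ask controller")
        print(f"{self.name}: packet to {dst} -> {port}")

class Controller:
    """Central 'brain': holds the global view, programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, dst, routes):
        # routes maps switch name -> output port, decided centrally
        for sw in self.switches:
            sw.install_flow(dst, routes[sw.name])

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.push_policy("10.0.0.0/24", {"s1": "port2", "s2": "port1"})
s1.forward("10.0.0.0/24")
s2.forward("10.0.0.0/24")

The switches only match and forward; every routing decision lives in the controller, which is the whole point of the architecture.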

With SDN, network administrators gain full insight into the network topology at any time, which allows traffic to be allocated better and automatically, especially during periods of increased data transmission. SDN also helps reduce both operational costs and capital expenditure.



Used mining GPUs. Avoid like the plague?




Ordinarily, you might want to avoid graphics cards that have been run around the clock to mine cryptocurrency. But that is not necessarily the case during a severe GPU shortage, when the best graphics cards are constantly out of stock, even at exorbitant prices.

An obvious concern with buying a graphics card that was used for mining is that its performance will have degraded significantly and that the GPU will fail sooner than expected. In practice, this is generally not the case: in our experience, mining GPUs do not show a significant reduction in performance. Let's explore some possible reasons, and some caveats.


Seasoned miners typically reduce the power limit and underclock the GPU core to make the card more efficient, overclocking only the memory. A gamer, by contrast, will want to overclock the GPU core, which is a riskier endeavor. Miners run their graphics cards 24/7, but this can also help minimize the heating/cooling cycles that are harmful to silicon. There are certainly other dangers, though. Heat is a major concern for GPUs, and problems can arise if a card has been used for mining in a very hot environment without adequate airflow.

Such graphics cards tend to flood the market as the value of cryptocurrencies declines. If you are interested in buying one, it is worth asking the seller for details: how long the card was running, the temperatures and thermal behavior it showed, and whether it was overclocked.
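
If the card is an NVIDIA one and you can test it in person, a quick way to read its current temperature, power draw and clocks is the vendor's nvidia-smi tool; the Python sketch below simply wraps it. It reports the card's current state only, not its history, and assumes a single GPU is installed:

# Query basic GPU health metrics via nvidia-smi (must be installed).
import subprocess

fields = "temperature.gpu,power.draw,clocks.sm,clocks.mem"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# One CSV line per GPU; with a single card we can unpack it directly.
temp, power, sm_clock, mem_clock = [v.strip() for v in out.split(",")]
print(f"temperature: {temp} C, power: {power}, "
      f"SM clock: {sm_clock}, memory clock: {mem_clock}")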



AMD EPYC - Does Intel still have a chance?




An AMD EPYC 7643 from the Milan family has been spotted in the Geekbench benchmark database, where it showed off the power of its Zen 3 microarchitecture-based cores. It turns out that this single processor scored better in the test than a platform built around two Intel Xeon chips.

The EPYC 7643 is one of the new AMD Milan series processors: a 48-core chip supporting up to 96 threads, based on the Zen 3 microarchitecture. Its appearance in the Geekbench database gives us a chance to gauge its CPU performance.

In the Geekbench 4 single-core test, the AMD EPYC 7643 scored 5,850 points; in the multi-core test, over 121,000 points. We know the base clock here is 2.3 GHz, with a maximum boost clock of 3.6 GHz.

The performance of this AMD EPYC chip is genuinely high. For comparison, a dual Intel Xeon Platinum 8276 setup, with 56 cores and 112 threads in total, scores 4,913 points in the single-core test and 112,457 points in the multi-core test. The new processor from the competition is therefore faster.
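
The quoted scores are easy to put in perspective with a few lines of arithmetic (Python, using the figures above):

# Compare the Geekbench 4 scores quoted in the article.
epyc_single, epyc_multi = 5850, 121_000    # AMD EPYC 7643 (48C/96T)
xeon_single, xeon_multi = 4913, 112_457    # 2x Intel Xeon Platinum 8276 (56C/112T)

print(f"single-core advantage: {epyc_single / xeon_single:.2f}x")  # ~1.19x
print(f"multi-core advantage:  {epyc_multi / xeon_multi:.2f}x")    # ~1.08x

# Normalising multi-core throughput by core count makes the gap starker:
print(f"EPYC per core: {epyc_multi / 48:,.0f} "
      f"vs Xeon per core: {xeon_multi / 56:,.0f}")                 # ~2,521 vs ~2,008

So the single EPYC is about 19% faster per core and about 8% faster overall, despite giving up 8 cores to the dual-Xeon platform.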

Source: www.komputerswiat.pl

