Programming On Linux – Greguti (http://greguti.com/)

Former student John Goerzen uses personal plane to help others – The Sunflower
Mon, 21 Jun 2021 19:03:09 +0000

Audrey Korte / The Sunflower

Former WSU student John Goerzen took his first volunteer job in March as a pilot with Angel Flight – an organization that provides free flights to people who need non-urgent transportation to medical appointments.

John Goerzen is a native Kansan with aviation in his blood.

The former WSU student flies as often as he can, but not just for fun. He uses his personal plane, a Bonanza A36, to improve the lives of others – donating his flight time and personal funds to make a difference.

Goerzen took his first volunteer assignment in March as a pilot with Angel Flight, an organization that provides free medical flights for people who need non-urgent transportation to medical appointments.

Goerzen also volunteers with EAA Young Eagles, which offers free airplane rides to 8- to 17-year-olds.

Goerzen graduated from WSU in 2009.

“I was a very non-traditional student,” he said. “I only did two and a half years of high school because I took so many classes at Wichita State.”

Goerzen was on the fast track to a degree but put it on hold when he accepted a job outside of Wichita during his third year of school. Upon his return to Wichita, he completed his degree in computer science.

Since then he has made a career in programming and systems administration.

Goerzen has worked for Fastly for about four years. Fastly is a company that provides hosting services for some of the biggest websites on the internet.

Goerzen said his job involves providing services for social media, news and video sites.

He was formerly vice president of IT and engineering for eFolder, a cloud backup and disaster recovery company. Goerzen has written books on programming and operating systems, and he has released several programs as free software. He has been a volunteer developer for the Debian GNU/Linux operating system since the late 1990s.

It’s clear that Goerzen enjoys connecting with people as much as he enjoys flying. This is what originally brought him to Angel Flight.

He said that from the early days, as he pursued a pilot’s license, he wanted to find a way to use his airtime to give back to others.

A pilot with a purpose

On his first trip as an Angel Flight pilot in March, Goerzen flew an elderly woman and her husband home to Amarillo, Texas, after her treatment for advanced cancer at the University of Texas MD Anderson Cancer Center in Houston.

“It’s a long drive and she was quite frail,” he said.

Goerzen said the woman was clearly quite weak from the treatment. She struggled to board the plane and sat in the back, but her husband wanted to ride up front in the co-pilot’s seat. Goerzen helped him climb onto the wing and enter through the front door.

As they watched the sunset over Texas, the man told Goerzen about his memories of the various towns he had lived in.

When they arrived at their destination, Goerzen helped the woman out of the plane, and she told him how much he had helped her.


“It’s a good feeling to do something right,” Goerzen said. “It’s appreciated.”

While Goerzen enjoys the work, it comes at a cost. He donates not only his time but also the wear and tear on his personal plane, and the fuel for these trips comes out of his own pocket.

“Just flying to Abilene, Texas, then Amarillo, then back here to Newton was, you know, several hundred dollars,” Goerzen said.

Goerzen said the costs are not covered – they are an additional part of volunteering.

There are a number of logistics that volunteer pilots need to take into account that the average person does not. He said pilots cannot accept money for a flight, even if it would only cover the cost of the philanthropy.

“Angel Flight’s regulations actually prohibit pilots from accepting any kind of money for the flight, which I wouldn’t do anyway,” he said. “It’s not about making money. It’s about doing something for someone else.”

Goerzen lives in an old farmhouse near Goessel that once belonged to his grandparents. His wife, Laura, is a pastor at First Mennonite Church of Christian in Moundridge. Together, they are raising three children, ages 3, 11 and 14.

“I try to teach the children that everyone has gifts and that everyone’s gifts are different, but there are people who are less fortunate than us,” Goerzen said. “If we pay attention, we will find that there are ways to use our gifts to help others.”


Review the best Linux VPS provider: OperaVPS, Monovm, and Time4VPS
Mon, 21 Jun 2021 00:00:23 +0000

A Linux VPS (virtual private server) runs on powerful hardware that delivers strong performance, speed and quality through various virtualization platforms, and it can run the most popular open-source programs and modules.

The many advantages of using a Linux VPS include ease of use, increased security, improved reliability, and low cost. However, for most webmasters, programmers, designers, and web developers, the main benefit of a Linux virtual server is its flexibility.

Websites help businesses that operate online present their products and services to an online audience, and a dedicated server is often necessary to support that work. Among the hosting services on the market, Linux hosting is a popular option, chosen by most people for its ease of use. With so many vendors, choosing a Linux server provider can be difficult; OperaVPS, Monovm, and Time4VPS are among the best VPS providers from which you can buy the Linux VPS that fits your needs.

OperaVPS.com

OperaVPS is our top pick among Linux VPS providers and an excellent choice when purchasing a Linux virtual server.

Inexpensive root access and SSD storage are among the most valuable features of a Linux VPS. Beyond offering both, OperaVPS responds quickly when you purchase a VPS, which is one of the main reasons to buy a Linux virtual server from this company; many competitors frustrate customers with slow responses. Hence, OperaVPS may be one of your top priorities thanks to its responsiveness.

Let’s take a closer look:

SSD storage

Thanks to SSD storage in a RAID 10 configuration, the servers respond very quickly to requests, giving your system optimal performance.

Servers backed by SSDs deliver unmatched quality and are up to four times more efficient and powerful than conventional virtual servers, thanks to dedicated resources and high-speed disks.
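As general RAID arithmetic (not an OperaVPS-specific figure): RAID 10 mirrors drives in pairs and stripes across the pairs, so only half of the raw disk capacity is usable. A quick sketch:

```python
def raid10_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID 10 array: half the raw total.

    RAID 10 mirrors drives in pairs and stripes across the pairs,
    so it needs an even number of drives, at least 4.
    """
    if drive_count < 4 or drive_count % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives, at least 4")
    return drive_count * drive_size_gb // 2

# Four 480 GB SSDs yield 960 GB of usable, mirrored, striped storage.
print(raid10_usable_gb(4, 480))
```

The trade-off is that the lost capacity buys redundancy (each block exists on two drives) plus striped read/write speed, which is why hosts advertise RAID 10 rather than plain striping.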

Several locations

Support for several different locations can be a plus when purchasing a Linux VPS, and OperaVPS offers a number of them, including popular locations such as the UK, the Netherlands, France, and Canada.

24-hour professional support

With OperaVPS’s professional support team and quick responses, you don’t have to worry about server issues; problems are resolved as quickly as possible.

Platforms available

Debian, CentOS, Ubuntu.

Note: These are the major platforms; you can work with the OperaVPS support team to install others.

Linux VPS Packages

OperaVPS believes that different projects require different technologies. Choose an inexpensive VPS hosting plan that matches your current needs, then upgrade and scale as your project grows.

OperaVPS Linux VPS plans range from $7.99 to $27.99, depending on the features offered, which is reasonable.

Payment methods: all popular online payment methods, such as credit card, Perfect Money, Bitcoin (BTC), and PayPal.

If you wish to buy a Linux VPS from OperaVPS, see: https://operavps.com/linux-vps/

Monovm.com


Monovm is another popular company providing quality virtual server services. The vendor offers various Linux VPS plans at great prices, with quick setup and enterprise-grade hardware, making it worth considering as a top priority when purchasing a virtual server.

Let’s take a closer look:

Quick Linux VPS Setup

Monovm provisions and activates a Linux server within 15 minutes of payment confirmation.

Enterprise hardware

MonoVM’s inexpensive Linux VPS hosting runs on high-end Intel processors in Supermicro servers, with storage devices configured in RAID 10 arrays using enterprise RAID controllers with integrated SSD caching, maintained by MonoVM technicians. In addition, these servers are equipped with HDDs, SSDs and even high-speed NVMe SSDs.

Full SSH access

Depending on the plan you choose, all hosting software and control panels can be installed, and all Linux server plans include full root access over SSH; that alone is a good reason to choose this Linux VPS.

Instant support

High-quality support from Monovm’s experienced team resolves any hardware or software issue in the shortest possible time.

Platforms available

Debian, CentOS, Ubuntu, Kali Linux, Arch Linux.

Linux VPS Packages

Considering the features on offer, Linux VPS products are priced from $5.99 to $64.99; this range of plans helps make Monovm one of the more popular VPS providers in the world.

Payment methods: multiple payment gateways, such as credit cards, PayPal, Perfect Money, WebMoney, and Bitcoin (BTC).

See https://monovm.com/linux-vps/ for details and to buy a Linux VPS from Monovm.

Time4VPS.com


Another of our suggestions when purchasing a Linux virtual server is Time4VPS, a professional VPS provider with a strong feature set.

When it comes to variety, Time4VPS comes first: its range of packages is far greater than the other companies mentioned. The one drawback that sets Time4VPS apart is the lack of unlimited bandwidth, though given its other features this is not very noticeable.

Let’s take a closer look:

Powerful hardware

Powerful HP ProLiant DL360 Gen10 servers built around multi-core Intel Xeon Gold 6132 processors (14 cores / 28 threads) and a dedicated 4 Gbps network connection are a major advantage; with 99.98% uptime, your Linux VPS will be online and available 24/7.

Customizable Linux kernel

You can add or remove kernel modules whenever you want. With full access to the system, you can freely modify and customize your kernel to free up resources or reduce memory usage, making resource management an easy experience.
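Before removing modules, it helps to see what is loaded. On Linux the loaded-module list lives in /proc/modules (the same data lsmod formats), and modules are removed with modprobe -r. The parser below is a generic sketch of reading that well-known format, demonstrated on sample text rather than a live server:

```python
def parse_proc_modules(text: str):
    """Parse /proc/modules-style lines into (name, size_bytes, use_count)."""
    rows = []
    for line in text.strip().splitlines():
        fields = line.split()
        # Each line: name size use_count used_by state load_address
        rows.append((fields[0], int(fields[1]), int(fields[2])))
    return rows

sample = (
    "ext4 733184 1 - Live 0x0000000000000000\n"
    "dummy 16384 0 - Live 0x0000000000000000\n"
)
for name, size, users in parse_proc_modules(sample):
    print(f"{name}: {size} bytes, in use by {users}")
# A module with a zero use count (like "dummy" above) is a candidate
# for removal, e.g.: sudo modprobe -r dummy
```

On a real VPS you would read the live file (open("/proc/modules")) instead of the sample string; the field layout is the same.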

Assistance on demand

Managing a Linux VPS can be tricky, especially at first. Time4VPS offers assistance and advice in these first steps to configure the server of your dreams on a Linux VPS. If you need more advice, visit the knowledge base or ask a question in the community forums; if you can’t find an answer there, submit a ticket through customer support or contact support via live chat.

Resellers welcome

Time4VPS also provides additional tools: a spam filter to block unsolicited email, SSL certificates to provide encryption and security for inbound and outbound communications, and a VPN to change your IP address and access the content you want. These tools can be used not only with Time4VPS hosting but also with other providers’ services.

Platforms available

CentOS, Ubuntu, Kali Linux.

Linux VPS Packages

Time4VPS offers the most diverse range of Linux VPS packages, though they are also a bit more expensive; prices range from EUR 1.99/month up to EUR 128.99/month.

Payment methods: Coinify, Stripe (Visa), PayPal, Alipay, and WebMoney.

You can refer to https://www.time4vps.com/linux-vps/ for more information on available plans and to purchase a Linux virtual server from Time4VPS.

SUMMARY

The companies introduced here were selected from the top Google search results and are shown because of the unique features of their VPS products. Buying from these sites is optional, and you can keep using your old VPS provider; but if you are unhappy with its quality, you can make these three providers your top priority.

Depending on your needs, buy the desired server from the supplier and use your server worry-free.


Top Robotics Internships in India in 2021
Sun, 20 Jun 2021 07:01:42 +0000


by Disha Sinha
June 20, 2021

Robotics has opened up a plethora of science job opportunities at prominent organizations with lucrative salaries, and it has been one of the most in-demand fields in recent years. While there is debate over whether robots will take over human jobs, robotics is widely seen as the future of industry. RPA has transformed the work environment by increasing productivity and assisting human employees in factories and hazardous environments, and it has created new professions such as robotics engineer, robotics technician, sales engineer, software developer and robot operator. Reputable companies need experienced professionals who can work with RPA effectively and efficiently, so many paid robotics internships are available on the global market to give aspirants hands-on training. Let’s look at some of the best robotics internship postings in India in 2021.

Robotics internship at KaaShiv Infotech

Location: Chennai

KaaShiv Infotech is a ten-year-old company led by recognized experts in Microsoft, Google, Cisco and HCL technologies, and recognized by Google as the premier internship company in Chennai. The company places great confidence in people and technology, providing specialized knowledge services to multiple industries with advanced solutions.

Responsibilities: Interns receive real-time project development and technology training from eminent professionals. The training is fully hands-on, with separate hardware and software machines and tools for students, and includes research and development projects with industrial exhibitions to build company skills as well as RPA skills. Students must complete documentation and report analysis on time. The internship runs three hours per day, over five days, 10 days, or one to six months.


R&D Engineering Internship at Barnstorm Agtech

Location: Pune

Barnstorm Agtech is a VC-funded Canadian startup with its research and development operation in Pune. The workplace consists of a 14,600 high-ceilinged factory and a large head office of modern, well-appointed bungalows, including large canopied outdoor spaces. It aims to build the most advanced “swarm farming” system of semi-autonomous, tractor-type vehicles.

Responsibilities: The candidate will work on 3D graphics visualization software, web HTML5 dashboard software, multiple database synchronization, robotic swarms, and computer-vision programming with tools such as OpenCV and ROS. The candidate will also work on the physical design and control of hybrid electric vehicle powertrains, including battery management, spread-spectrum data communication, sensor data acquisition and management, autonomous navigation, LiDAR, and actuated mechanisms such as robot arms, automated docking, and heavy-load handling.

Conditions: Candidates may be graduates of any specialization, with sufficient knowledge of robotics, mechanical design, Python, LiDAR, HyperMesh, SQL, OpenCV, Linux, ROS, agriculture and 3D modeling.


Management Intern – RPA Support at Genpact

Location: Bangalore

Genpact is a global professional services company focused on turning business ideas into reality through digital innovation and smart operations. Genpact Cora combines end-to-end operations expertise with an AI-powered platform.

Responsibilities: The intern will troubleshoot issues in the day-to-day operation of RPA processes, provide timely root-cause analyses and solutions when required, design RPA solutions following established principles and conventions, and create and maintain documentation. The intern will also support existing processes and changes under the structured change-control process, and support the monitoring of robots in daily operations using the version-management system across the robot code life cycle. The candidate must also handle the transition from RPA development and manage the robots’ end users.

Conditions: The candidate should have a clear understanding of the Blue Prism RPA tool and of turning a business-process workflow diagram into the practical configuration of automated processes in automation software. The intern also needs certification in major RPA products and hands-on experience with VB scripts and service-management tools such as JIRA and ServiceNow.


Robotics software intern at General Systems Pvt. Ltd.

Location: Gurgaon

General Systems Pvt. Ltd. is a Singapore-based startup targeting the multi-billion-dollar construction and mining market, with a global customer base.

Conditions: The candidate must have solid programming knowledge of C++, Python and Java on Linux, including kernel-level work. Hands-on experience is expected with MATLAB, ROS, RViz, Gazebo, Arduino, Raspberry Pi, and LabVIEW, along with sufficient knowledge of actuators, stepper motors, servomotors, frequency converters, motor controllers and motion controllers.


Mechanical, Robotics or Mechatronics Engineer internship at Om Siddh Vinayak Impex Pvt. Ltd.

Location: Gujarat

Om Siddh Vinayak Impex Pvt. Ltd. is a textile company focused on recycling used clothing and putting it back in the hands of individuals all over the world. It is the fastest-growing retail company handling exports to consistent quality standards across Africa, Asia, Central America and South America, with two separate warehouses in India for sorting used clothing from the United States and Japan.

Responsibilities: The intern will work on automation projects in a real industrial workplace dealing with mechanics, robotics and mechatronics.

Conditions: The intern should have sufficient knowledge of the Raspberry Pi, Arduino, ESP or other microcontrollers, AI-based image processing, and programming languages such as Python.


Robotics training internship at Incrediminds

Location: Gurgaon

Incrediminds is a tailor-made platform offering one-on-one online robotics and coding lessons for all age groups, with experienced trainers who share their practical knowledge. Its learning methodology, a learn-assess-apply cycle, provides a comprehensive opportunity to learn while completing robotics projects. The company currently has 1,700 students in ten countries, with 500 projects.

Responsibilities: The intern is expected to create robotics projects, conduct automation lessons for school students, and manage documentation for these robotics projects.

Conditions: The intern should have sufficient knowledge of Microsoft Office, robotics, Arduino and programming languages such as Python, C++ or Java.




10 best bioinformatics jobs and companies hiring right now
Thu, 17 Jun 2021 19:20:53 +0000

Demand in the bioinformatics industry continues to increase, so many companies are opening their doors to more bioinformatics jobs.

The real question is, which companies to apply for and who is currently hiring?


10 bioinformatics companies hiring now

1. PPD

PPD is a biotechnology organization that focuses on new technology research, drug development, and laboratory management services, among others.

This company has more than 10,000 employees and is currently looking for Programmer Analysts in Bioinformatics.

The requirements are at least a bachelor’s degree in a scientific, technical or quantitative field, as well as at least three years of experience in SAS programming, project management and quality-control support.

2. Amyris, Inc.

Amyris is a biotechnology and renewable chemicals company focused on flavors, fragrances, and ingredients for cosmetics and pharmaceuticals.

They are currently accepting bioinformatics interns working remotely. Interns will work in the Department of Bioinformatics and Software Engineering, where they will tackle quantitative problems for the company’s research and development efforts in rational strain design and construction and in the exploration of genotype/phenotype data.

3. Leidos Biomedical Research, Inc.

Leidos is a biomedical research company that strives to find possible treatments for cancer and AIDS.

The company is seeking bioinformatics scientists for its public health projects, with experience in next-generation sequencing (NGS) data analysis and phylogenetic analyses of viral, bacterial, fungal and protist genomes, transcriptomes and metagenomes, based in its Atlanta, GA office.
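As a toy illustration of the kind of sequence-level computation NGS pipelines are built from (a general example, not anything specific to Leidos’s projects), here is a minimal GC-content function, a statistic routinely reported for genomes and sequencing reads:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

# Four of the six bases in this toy sequence are G or C.
print(gc_content("ATGCGC"))
```

Real NGS analysis applies millions of such per-read computations, which is why these roles emphasize both programming and biology.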

Requirements for this position include a master’s or doctoral degree in bioinformatics, genomics, molecular biology, genetics or a related field, as well as three years of experience in bioinformatics.

4. United States Department of Health and Human Services

The US Department of Health and Human Services is looking for a biologist in its Bioinformatics and Computational Biosciences Branch to design new models, practices and technologies for NIAID clients, as part of its scientific efforts.

GS-13 level in the federal service is required for this position, along with a bachelor’s degree in biological sciences, agriculture, natural resource management, chemistry or related disciplines.

5. Broad Institute

Launched in 2004 by MIT and Harvard, the Broad Institute is a research institute aimed at better understanding human biology and the treatment of human disease using genomics.

Among other openings, the Broad is looking for a Laboratory Operations Specialist to handle laboratory maintenance, documentation and reports, and assist with training, among other duties. Laboratory operations include organizing the freezer, handling samples and monitoring workflows.

The requirements for this job are at least 1-3 years of experience in the field and a BS or BA in a related field.


6. Yale University

Yale University is seeking a Bioinformatics Research and Education Librarian.

Part of the responsibilities consists of liaising between the library and the research center and clinical departments, in coordination with students, staff, professors, post-docs and researchers.

7. University of Michigan

The Associate Researcher position at the University of Michigan involves working for Dr. Steve Parker in the departments of Computational Medicine and Bioinformatics and Human Genetics.

The general specifications of the job are to generate mechanistic knowledge about how disease susceptibility is encoded in the non-coding part of the genome.

The requirements for this job are a doctorate, completed or near completion, in a relevant field. Programming skills are required, as well as strong skills in molecular biology.

8. Ohio State University

Ohio State University is seeking a bioinformatician for its Department of Microbiology in the College of Arts and Sciences, who will work in the Bradley Lab and with members of the Infectious Diseases Institute.

Part of the job will be to research, develop and support software for microbial data analysis, in addition to working with high-performance computing clusters in a Unix/Linux environment.

As for the requirements for this bioinformatics job, a master’s degree in computer science, along with extensive laboratory experience, is required.

9. Duke University

The Department of Biostatistics and Bioinformatics at Duke University has open faculty positions at all levels.

Part of the job is to research methods to improve health equity. Knowledge of machine learning on electronic health records and/or socioeconomic determinants of health is also required.

10. Genentech, Inc.

Genentech is a biotechnology company that does extensive research and development. It is a subsidiary of Roche, with its head office in South San Francisco, California.

Genentech is looking for interested candidates for its Drug Development Training Program (DDTP) in Developmental Sciences, starting in September.

The educational requirement for this job is at least a nearly completed doctorate.

The disciplines required include biology, biochemistry, biomedical engineering, biomedical sciences, cell biology, chemistry, chemical engineering, data science/bioinformatics, microbiology, pharmaceutical sciences, pharmacology, toxicology or other related disciplines.

How to get a job in bioinformatics?

Getting a job in bioinformatics may require specific technical skills. One example is broad knowledge of a rapid-prototyping language if you are aiming for a bioinformatician position. Some positions may require a master’s or doctoral degree, depending on the demands of the job.

When it comes to where to apply, online sites like Glassdoor or Simply Hired are great sources of bioinformatics jobs. This can range from private sector work to universities, or even government offices.

What positions can you get?

Jobs ranging from analyst, assistant professor, and bioinformatician to researcher can be found on many websites across the web. In this article, we’ve listed the top 10 companies you can work for in this field.

Extensive laboratory experience is required for some positions, while others may require a master’s or doctoral degree; it all depends on the job you are looking for.

Conclusion

From private institutions to government positions to universities, bioinformatics jobs are abundant. Who knows, landing the job of your dreams might be just one click away via those listed above.

Do you have any experience working with the companies listed above? Let us know in the comments.


Why I can’t wait for improved vertical display support on Chromebooks
Thu, 17 Jun 2021 16:02:10 +0000

Earlier this week, Android Police noticed some upcoming changes in Chrome OS that will allow Chromebooks to support vertical displays. I suspect most of you don’t care but I do. I can’t wait for this feature and aside from explaining why it matters to me, I’ll illustrate a use case that you’ll benefit from as well.

First, the news: According to code commits, future versions of Chrome OS will let you dock or snap windows to the top or bottom of a display.

Currently, the snap feature only allows windows or tabs to be on the left or right of your screen. And that makes sense for a traditional Chromebook display.

But, when using an external monitor that can rotate from landscape to portrait mode, snapping apps to the top or bottom is super useful.

From my perspective and use cases, this will be a godsend for coding. I plan to use my Chromebook for a Masters in Computer Science program starting this fall and I already have a rotating monitor.

Using it in portrait mode specifically for coding is vastly superior to viewing code in landscape mode. I can fit much more of the programming text on the screen.

And that’s helpful when tracking down bugs or seeing how different functions work together. With more vertical real estate and the supported snapping, I can even have a debugging window or some other tool open underneath the code.
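The gain is easy to estimate. Assuming a 1920x1080 monitor and roughly 20 pixels per line of code (both numbers are illustrative, not from any specific setup), rotating to portrait simply swaps the dimensions:

```python
width_px, height_px = 1920, 1080   # assumed monitor resolution
line_height_px = 20                # assumed height of one line of code

landscape_lines = height_px // line_height_px
portrait_lines = width_px // line_height_px  # rotated 90 degrees, width becomes height

print(f"landscape: {landscape_lines} lines, portrait: {portrait_lines} lines")
```

Under these assumptions that is 54 versus 96 visible lines, nearly double, which is exactly the kind of win that matters when reading long functions.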

Coding on a Chromebook
I need more up and down space!

Of course, we don’t all do programming on our Chromebooks. But we probably all read website content, yes?

An improved vertical display experience on Chromebooks will let you see more of that content without scrolling. Most text on the web is really set up for this viewing experience anyway.

Clearly it’s that way on phones, for example. And most news sites, blogs or other similar web properties are too.

You’ll typically see extra space on the left and right of web content in these cases with a full screen wide monitor for example. On a vertical monitor using a Chromebook or other device, you won’t see that extra space. Instead you’ll see more actual content.

Obviously, if you don’t have an external monitor, let alone one that rotates, you probably don’t care about this upcoming feature. If you do, though, you’ll likely be happy to see this functionality arrive.


Raspberry Pi 4 Model B Review: High-Performance, Flexible and Affordable DIY Computing Platform
Wed, 16 Jun 2021 06:17:24 +0000

The Raspberry Pi Foundation aims to bring computing to people around the world by making hardware and software accessible to the masses through low-cost single-board computers. It’s a model reminiscent of the early days of personal computing, when cheap, easy-to-program hardware influenced a generation, and the Raspberry Pi aims to do the same today.

The latest-generation device is the Raspberry Pi 4, a series with a significantly improved processor and up to 8GB of RAM. The Pi 4 has the same set of 40 GPIO pins for working with your own or third-party hardware, as well as USB 2 and USB 3 ports and a pair of micro-HDMI video outputs. Powered by USB-C, it features 802.11ac Wi-Fi and Gigabit Ethernet for connectivity. Raspberry Pi recently added a Power-over-Ethernet (PoE) option, which is ideal for IoT projects where you can drop a Pi at the end of an Ethernet cable.

Installation and configuration

Getting started with your Raspberry Pi is easy. The Pi 4 can be purchased from various vendors, starting at £33.90 for the 2GB device ($35 in the US) and rising to £73.50 for the 8GB option ($75 in the US). If you buy a bare board, you’ll need a power supply and a MicroSD card to get started; setup also requires a micro-HDMI cable, keyboard, and mouse (unless you’re controlling your Pi remotely from your PC). Alternatively, you can purchase a kit that includes much of what you need: the starter kit comes in at £58 ($68.20 in the US), and a more fully featured desktop kit with keyboard, mouse, and case costs £116 ($120 in the US).

I use a Logitech wireless USB multimedia keyboard with a built-in trackpad, so there are no cables to worry about. The Pi 4 also works with Bluetooth devices (Bluetooth 5.0 and BLE are supported), providing plenty of keyboard and mouse options. Other connections include a camera port and a display connector for an integrated display, both useful for DIY IoT projects.

Configure the boot SSD using the Raspberry Pi imager.

Image: Simon Bisson / ZDNet

Storage selection

MicroSD cards may have their limitations, but schools and code clubs can cheaply image a set of low-cost cards to give to each student and replace in the event of failure or loss. You can start with just 8GB of storage, but in practice 16GB or 32GB is a better option, and Raspberry Pi OS will automatically resize its partition to accommodate whatever storage it finds.

The initial firmware on the Pi 4 did not support booting from USB SSD drives, so you had to rely on a MicroSD card. Cards are a cheap solution, but they are not especially reliable, and if you do not back up your system regularly you run the risk of losing data when one fails: MicroSD cards are not designed for PC-style workloads and can easily exceed their write cycles.


The introduction of USB boot support changed things dramatically, and you can now use a PC to write a boot image to an SSD drive with the ready-to-use Raspberry Pi Imager. If you are using an older Raspberry Pi 4, you may need to enable USB boot support using the console-based configuration tool; it should be enabled by default on newer hardware.


Selection of the operating system on the Raspberry Pi imager.

Image: Simon Bisson / ZDNet

Performance: choose the right Pi for your job

Performance is sufficient in most cases and you can run most software without a problem, but you shouldn’t expect a speed demon. The quad-core Arm Cortex-A72 SoC is clocked at 1.5GHz, and in our testing the 8GB Pi 4 achieved a single-core score of 183 and a multi-core score of 576 in the ARMv7 beta of Geekbench 5. By comparison, the low-end Surface Go scored 357 and 906 respectively.

I have used two different Pi 4s for different purposes. The 8GB system serves as a low-cost Linux desktop computer, using the built-in HDMI to drive a monitor, with a wireless mouse and keyboard. The other is an ADS-B receiver running a custom Linux setup, which I also use to develop and manage the open source projects I maintain. This flexibility is probably the Raspberry Pi’s key value.

It is a powerful platform for creating your own hardware and software. The GPIO pins allow you to expand the device with your own or third-party hardware, commonly packaged as a HAT (Hardware Attached on Top). You can start with Raspberry Pi’s own operating system, Raspberry Pi OS, a customized version of Debian that comes preconfigured with basic programming tools and recommended teaching software. That is enough to get started, and since Raspberry Pi OS is ARM Linux, you can install any ARMv7 binary. The Pi desktop environment includes a software installer, and you can also add new repositories and install software from the command line.

One of the key issues is heat, especially if you plan to run a Linux desktop: the Pi 4’s processor can get hot under load, so a fan or heatsink case is recommended. I put all my devices in Argon 40’s Argon ONE case, which provides a software-controlled fan and a user-configurable power button, and uses an expansion board to route the HDMI ports so that all of the Pi’s connections are on the back of the case. This makes cable management a lot easier, especially if you’re driving two monitors from both HDMI ports.


Raspberry Pi OS desktop.

Image: Simon Bisson / ZDNet

Conclusion

The Raspberry Pi Foundation has indicated it will focus on software for the remainder of 2021, so the Pi 4 will remain its flagship single-board computer for at least a year. That should keep Raspberry Pi OS maturing and regularly updated, covering both the device firmware and the Linux operating system. The OS remains 32-bit, but you can install a fully 64-bit alternative such as Ubuntu, though I would recommend an 8GB Pi for that.

Nothing beats the Raspberry Pi as an educational and introductory computer. Each new release has added more features and kept the same interfaces for third-party hardware without drastically resizing the board. For someone who grew up on 8-bit and 16-bit computers in the 1980s, these devices carry a certain nostalgia, but they are also forward-looking hardware built on that legacy, aiming to inspire many more developers and engineers.

Just as the Sinclair Spectrum and the BBC Micro educated a generation, the Raspberry Pi 4 is clearly hardware aimed at doing the same for the next one, and there’s a good chance it will succeed. It is the board to buy if you need a low-cost PC for programming or IoT projects, or if you want your kids to learn the basics of computing, from Minecraft to Python.

Recent related content

Raspberry Pi: After launching 5 devices in a year, here’s what to do next:

How the Raspberry Pi made it hard for an astronaut to sleep on the ISS

Raspberry Pi 4: How to create a Twitter bot to track planes passing overhead

The Imager tool on the Raspberry Pi added these new options

Best Raspberry Pi 2021 Alternative: Top SBC

Boot the Raspberry Pi 4 from a USB device

Meet the official Raspberry Pi 4 case fan

Home computer: 100 icons and book reviews that define the digital generation



Source link

TimeCache aims to block side channel cache attacks – without compromising performance • The Register https://greguti.com/timecache-aims-to-block-side-channel-cache-attacks-without-compromising-performance-the-register/ Tue, 15 Jun 2021 13:45:00 +0000

Researchers at the University of Rochester have created TimeCache, an approach to system security intended to protect against cache side-channel attacks like evict+reload and Spectre, without the usual deleterious impact on performance.

2018 wasn’t a good year for chipmakers or their users, with the unveiling of a range of new attacks on the very design changes introduced over the years to improve performance. Known as Meltdown and Spectre, the vulnerabilities exist in the processors themselves and allow processes to access or infer the contents of memory used by other processes, exposing everything from passwords to cryptographic keys.

While the vulnerabilities themselves are hardware-based, the fixes have arrived in the form of firmware and software. Unfortunately, but not surprisingly given that the flaws lie in hardware designed to improve performance, this means the fixes can slow down some workloads. Even after the patches, new vulnerabilities exploitable through side-channel attacks keep being discovered, including a proof-of-concept attack released by Google in March of this year.
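To make this class of attack concrete, here is a deliberately simplified Python model of a flush+reload-style probe – my own sketch, not from the paper; addresses, timings, and names are all invented – showing how shared-cache timing alone can reveal which data a victim touched:

```python
class SharedCache:
    """Toy cache shared by a victim and an attacker; timings are arbitrary."""
    HIT, MISS = 1, 100  # access latency in made-up cycles

    def __init__(self):
        self.lines = set()  # addresses currently cached

    def flush(self, addr):
        self.lines.discard(addr)

    def access(self, addr):
        if addr in self.lines:
            return self.HIT
        self.lines.add(addr)  # a miss brings the line into the cache
        return self.MISS

def victim(cache, secret_bit):
    # The victim touches one of two shared lines depending on a secret.
    cache.access("line_A" if secret_bit else "line_B")

def flush_reload(cache, secret_bit):
    cache.flush("line_A")             # 1. Flush the monitored lines
    cache.flush("line_B")
    victim(cache, secret_bit)         # 2. Wait while the victim runs
    reload_time = cache.access("line_A")
    return reload_time == SharedCache.HIT  # 3. Fast reload => victim used line_A

recovered = [flush_reload(SharedCache(), bit) for bit in (1, 0, 1, 1)]
print(recovered)  # [True, False, True, True]: the secret leaks via timing
```

The attacker never reads the victim’s memory; it only measures how long its own accesses take, which is exactly the leak TimeCache sets out to close.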

The creators of TimeCache – Divya Ojha and Sandhya Dwarkadas of the Department of Computer Science at the University of Rochester – say this could be the answer. Targeting the specific subclass of cache side-channel attacks enabled by software sharing, TimeCache is claimed to offer protection with minimal impact on performance, while retaining all the benefits of sharing things in the first place.

Performance and protection, finally together

“Our defense against timing side channels through shared software retains the benefits of allowing processes to use the full capacity of a shared cache,” the pair explained in a paper presented at the 48th Annual ACM/IEEE International Symposium on Computer Architecture (ISCA). “[It] helps reduce cache and memory pressure with data deduplication and copy-on-write sharing.”

The basic concept behind TimeCache is that it incorporates knowledge of previous cache-line accesses, so that a given process’s first access to a cached line is delayed. This means it is not possible to infer whether another process running on the same system requested the same data first.

“We achieve our goal,” the researchers explained, “by implementing per-process cache-line visibility, so that processes do not benefit from cached data brought in by another process until they have suffered a corresponding miss penalty. The solution works at all cache levels without having to limit the number of security domains, and defends against a malicious process running on the same core, on another hyper-thread, or on a different core.”

Hardware modifications required

There is a catch, of course. TimeCache cannot be implemented in software alone: it requires hardware modifications, adding a security bit per cache line, per hardware context, called the “s-bit”; a timestamp per cache line; a shift register; and a bit-serial timestamp comparison logic block, with gating and bit-line circuitry to speed up the timestamp comparison.
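As a rough illustration of the per-process visibility rule – my own toy model, not the authors’ design; it keeps only the “first access per process is slow” behaviour and ignores the real timestamps, shift register, and comparison hardware – the idea can be sketched in a few lines of Python:

```python
class TimeCacheModel:
    """Toy model of TimeCache's per-process cache-line visibility."""
    HIT, MISS = 1, 100  # latencies in made-up cycles

    def __init__(self):
        self.cached = set()   # lines physically present in the cache
        self.seen_by = {}     # line -> set of process ids (stand-in for s-bits)

    def access(self, pid, addr):
        seen = self.seen_by.setdefault(addr, set())
        if addr in self.cached and pid in seen:
            return self.HIT   # this process already paid its miss penalty
        self.cached.add(addr)
        seen.add(pid)         # set this process's s-bit for the line
        return self.MISS      # first per-process access is delayed to miss time

cache = TimeCacheModel()
assert cache.access("victim", "x") == TimeCacheModel.MISS    # genuine miss
assert cache.access("victim", "x") == TimeCacheModel.HIT     # victim's reuse is fast
# An attacker probing the same line still observes a miss-time access,
# so it cannot tell that the victim brought the line in first:
assert cache.access("attacker", "x") == TimeCacheModel.MISS
assert cache.access("attacker", "x") == TimeCacheModel.HIT   # later accesses are normal
```

Because each process observes its own first access as a miss, the timing signal that a reload-style probe depends on disappears, while the data itself stays shared in the cache.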

These changes add their own overhead, which could be felt in the very environments where shared software security is most valuable: the data center.

“The total number of s-bits [required] may be significant for the LLC [Last Level Cache] in server-class processors,” the team admitted, pointing to methods already in use for scaling coherence directories that could be applied to TimeCache to “reduce area overhead to O(log(n)) as opposed to m bits per cache line.”

To prove the concept, the researchers tested TimeCache using the gem5 architecture simulator. The system defended against information leaks in a microbenchmark test and against an active attack on the GnuPG version of the RSA algorithm using flush+reload – an attack that worked both on real hardware and in simulation without TimeCache, but was blocked with TimeCache in place. Testing also demonstrated that the implementation did not add any additional side channels that could be exploited.

The impact on performance was negligible. Testing with the SPEC2006 and PARSEC benchmarks revealed an average overhead of only 1.13%, a small impact on what appears to be full protection against a whole class of attacks.

Push TimeCache to users

The TimeCache approach may end up showing up in free and open source architectures before it gets a sniff at proprietary chips.

“To fully assess changes in processor design, researchers need access to simulation environments at different levels of abstraction, as well as to full RTL sources,” said Stefan Wallentowitz, professor at Munich University of Applied Sciences and a director of RISC-V International and of the FOSSi Foundation, the non-profit dedicated to building the community and ecosystem surrounding free and open source silicon, in response to the article.

“Unlike proprietary alternatives, free and open source silicon and the RISC-V ecosystem today provide many of these components, allowing researchers to evaluate their ideas across all layers of abstraction, and even to share their implementations for others to replicate and build upon,” he added.

“In terms of protecting application developers from Spectre-type vulnerabilities, the best approach is to take the burden of protection away from developers,” said application security expert Sean Wright of the potential impact of the TimeCache technique. “By that I mean build protection into the framework, the operating system, or have libraries that are easy to use.”

He continued: “Developers are often under pressure to deliver new functionality and don’t have the time to spend trying to implement some form of security protection mechanism, let alone sufficient knowledge to do it.”

More details on TimeCache, which can be combined with other defenses including cache randomization, can be found in this PDF copy of the paper: “TimeCache: Using Time to Eliminate Cache Side Channels when Sharing Software.” ®


Source link

How to create a CI/CD pipeline with Azure and GitHub https://greguti.com/how-to-create-a-ci-cd-pipeline-with-azure-and-github/ Mon, 14 Jun 2021 18:56:15 +0000

Continuous integration/continuous delivery (CI/CD) pipelines form the basis of modern software development, delivering business value in several ways: streamlined releases, software with fewer bugs, and automation that removes many tedious and error-prone manual steps. You can develop software without CI/CD, but a CI/CD pipeline makes your life easier if your business uses Agile, cloud-native applications, or any distributed application architecture such as microservices.

There is no one-size-fits-all approach to building a CI/CD pipeline. A typical example relies on certain prescriptive components – a CI engine, a code repository, a test framework – but an organization’s CI/CD plan will likely diverge, depending on its infrastructure and tooling and on whether it chooses continuous delivery or continuous deployment. Organizations that rely on cloud-based applications and services may wish to run CI/CD through their chosen cloud platform.

Most cloud providers offer a template for quickly setting up a CI/CD pipeline. Consider, for example, a CI/CD pipeline that uses GitHub and deploys an application to Microsoft Azure. A service called DevOps Starter provides CI/CD starter templates that integrate with either GitHub or Azure DevOps and bootstrap a repository with a demo application’s source code, a pipeline definition as code, and infrastructure as code (Azure Resource Manager templates). Once these are in place, the service pulls everything together into an end-to-end deployment pipeline.

So let’s start this walkthrough on how to set up a CI/CD pipeline in Azure, with GitHub for version control and repository hosting. First, go to the Azure portal, search for “devops”, and select the “DevOps Starter” service.

Get started with the DevOps Starter service.

In the DevOps Starter service pane, click Add, which presents a set of preloaded templates to choose from. Select the programming language for your app – in this example I chose .NET – along with GitHub for version control and GitHub Actions for pipelines (Azure DevOps is also an option). Hit the Next: Framework button.

DevOps Starter for Azure, choice of language and CI / CD components
Choose CI / CD templates, language, version control, and pipeline destination.

Now choose the ASP.NET Core framework for our .NET application, because it is cross-platform; later we can deploy our application in Linux containers. Then click Next: Service.

Choose a framework
Choose your CI / CD infrastructure taking into account your deployment plans.

You can choose from several Azure services to deploy your application. For this exercise, we’ll choose Web App for Containers, which deploys containers to an Azure App Service, a managed application hosting platform. Click on Next: Create.

Choose specific cloud services for CI / CD deployment
Choose your services for deployment on Linux, Windows or containers.

Since we have chosen GitHub for version control and pipelines, the next step prompts us to authorize the connection to GitHub in the Azure portal. Click Authorize and log into GitHub.

Log in to GitHub
Allow connection to GitHub for version control.

Once we have authorized GitHub, we are presented with several parameters (the last three are specific to deployment to Azure):

  • Organization: the GitHub organization to use
  • Repository: the name of the repository (Azure does not generate one for you)
  • Subscription: the Azure subscription for hosting cloud resources
  • Web application name: a unique name under which the application will be hosted
  • Location: the Azure region where the required cloud resources will be created

Fill in the details and click Review + Create.

Settings to connect GitHub to Azure resources
Fill in the settings to connect the GitHub repository to Azure resources and the hosted region.

Once the deployment is complete, we can go back to the DevOps service and see that our instance has been created.

Confirm Azure CI / CD deployment configuration
Confirm that the GitHub and Azure deployment configuration is created.

Clicking on the instance presents a landing page with details of the GitHub workflow and corresponding Azure resources.

GitHub and Azure resources confirmed
GitHub workflow and Azure resource creation details.

We confirm that the Azure Starter Service has created a GitHub repository under our specified organization, and it has set up a GitHub Actions CI / CD pipeline to build, test, and release our software. Here is the layout of the repository:

  • .github/workflows: folder containing the pipeline definition
  • Application: folder containing the application source code, unit tests, and functional tests
  • ArmTemplates: folder with infrastructure-as-code definitions for deploying to our cloud provider (Azure)
Repository created for an Azure-based CI / CD pipeline
Repository connections created for the GitHub-Azure CI / CD pipeline.

The image below shows the CI/CD job as it executes in GitHub Actions.

Configuring GitHub Actions
Configure GitHub Actions for the CI/CD pipeline.

Build, deploy and test in a CI / CD pipeline for Azure and GitHub

Let’s dive deeper to understand what happens inside a single CI/CD run in our example, across its three stages.

Build stage

This job performs a series of steps to build the application:

  • Check out the source code from the master branch.
  • Authenticate with Azure (or the cloud provider of your choice). GitHub Actions supports the major cloud providers out of the box.
  • Build and run unit tests using the dotnet command-line interface (CLI). In this example the test framework is MSTest, but the dotnet CLI can work with different test frameworks and run them the same way. You could also use another build tool for your .NET applications with GitHub Actions, such as Cake, instead of the dotnet CLI.
  • Use Azure Resource Manager (ARM) templates to deploy the required container registry. Since we use Azure to host the container registry and as the managed platform for our application, we use ARM templates to deploy our infrastructure in an idempotent way. For CI/CD with other cloud providers, such as AWS or Google Cloud Platform, we could use AWS CloudFormation or Terraform templates, for example.
  • Get the container registry credentials from the cloud provider and authenticate with them.
  • Build the container image and push it to the container registry.

Deploy stage

The primary responsibility of this stage is to take the artifact (the container image from the previous job) and deploy it to the environment on the cloud provider, with the required cloud resources, to host the application instance. Specific steps include:

  • Check out the master branch
  • Authenticate with the cloud provider
  • Use ARM templates to deploy the required cloud App Service instance
  • Retrieve the container registry credentials
  • Deploy the container image published to the container registry onto the App Service instance

Functional test stage

This job runs various functional tests to validate that our deployed application is working. The steps include:

  • Check out the master branch
  • Configure the .NET version
  • Use PowerShell to update runtime settings to point at our application instance
  • Run the functional tests using the dotnet CLI
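The generated pipeline runs its functional tests with the dotnet CLI and MSTest, but the underlying pattern – poll the freshly deployed instance until it answers, backing off between attempts – is easy to sketch. The following Python is an illustrative stand-in, not part of the generated workflow; the URL and parameter names are hypothetical:

```python
import time
import urllib.error
import urllib.request

def smoke_test(url, attempts=5, base_delay=1.0, probe=None, sleep=time.sleep):
    """Poll a freshly deployed app until it answers HTTP 200, with
    exponential backoff between attempts. `probe` and `sleep` are
    injectable so the logic can be tested without a live deployment."""
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=10) as resp:
                    return resp.status
            except urllib.error.URLError:
                return None  # connection refused, DNS failure, etc.
    for attempt in range(attempts):
        if probe(url) == 200:
            return True
        sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
    return False

# Simulated run: the app errors twice while starting, then comes up healthy.
statuses = iter([None, 503, 200])
waits = []
ok = smoke_test("https://my-app.azurewebsites.net/",  # hypothetical app URL
                probe=lambda u: next(statuses), sleep=waits.append)
print(ok, waits)  # True [1.0, 2.0]
```

Making the prober and the sleep function injectable keeps the retry logic unit-testable, which matters in a pipeline where a flaky smoke test can block a release.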

This example CI/CD pipeline in Azure with GitHub involves only one environment, but you can chain multiple environments together, with gates between them to control the release of deployed software.


Source link

Wasmtime 0.28 is released with C++ integration support https://greguti.com/wasmtime-0-28-is-released-with-c-integration-support/ Sun, 13 Jun 2021 09:51:00 +0000

In addition to the recent WebAssembly project releases of Wasmer 2.0-rc and WASM3 v0.5, the Bytecode Alliance – made up of Intel, Mozilla, and other organizations – has announced Wasmtime 0.28.

Wasmtime is the work of the Bytecode Alliance, which was formed in 2019 with the idea of being able to run WebAssembly anywhere. Its main focus has been on Wasmtime as a stand-alone JIT-style WebAssembly runtime. Alongside Wasmtime, this led to the Cranelift code generator, which translates a target-independent IR into executable machine code; both are written in the Rust programming language.

New this week on the Wasmtime front is the release of Wasmtime 0.28. With this new version, the project has redesigned its embedding API. The redesign should offer a better experience for Rust users and easier memory management. There is also now a C++ embedding of Wasmtime via the wasmtime-cpp codebase. Wasmtime’s embedding interface enables WebAssembly support to be integrated into applications written in other programming languages: Rust, C, Python, .NET, Go, Bash, and now C++ as well. There are also other unofficial language bindings/APIs for other languages.

Besides the embedding API work and the new C++ implementation, this new version also brings other API changes and low-level code improvements.

For those interested in this WebAssembly runtime, Wasmtime 0.28 can be found on GitHub.


Source link

Intel’s ISPC Compiler Adds Support for Alder Lake + Sapphire Rapids and Apple Arm Chips https://greguti.com/intels-ispc-compiler-adds-support-for-alder-lake-sapphire-rapids-and-apple-arm-chips/ Sat, 12 Jun 2021 11:05:00 +0000

On Friday afternoon, Intel released a new version of its ISPC compiler, the Implicit SPMD Program Compiler, which implements a variant of the C programming language with extensions for single-program, multiple-data (SPMD) programming on CPUs and GPUs. This release not only prepares support for upcoming Intel processors but also adds support for Apple’s Arm processors.
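ISPC programs are written in a C dialect, but the SPMD execution model itself – one scalar-looking program run across a “gang” of instances mapped onto SIMD lanes – can be illustrated in plain Python. This is an analogy only; the gang size and function names are invented, and real ISPC executes lanes in lockstep with masking rather than a Python loop:

```python
def spmd_map(program, data, gang_size=4):
    """Run `program` once per element, a gang's worth of lanes at a time,
    mimicking how ISPC maps program instances onto SIMD lanes."""
    results = []
    for base in range(0, len(data), gang_size):
        gang = data[base:base + gang_size]        # inputs for one gang
        results.extend(program(x) for x in gang)  # each lane runs the same program
    return results

def saturate(x, lo=0, hi=255):
    # The per-instance program: uniform control flow, varying data.
    return min(max(x, lo), hi)

print(spmd_map(saturate, [-5, 10, 300, 42, 7]))  # [0, 10, 255, 42, 7]
```

The point of the model is that the programmer writes `saturate` as ordinary scalar code, and the compiler, not the programmer, is responsible for spreading the gang across SSE, AVX, or NEON lanes.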

While this C-based SPMD programming language and compiler is tailored to the Intel architecture, extracting performance especially through SSE and AVX vectorization, the new ISPC version 1.16 adds support for Apple’s Arm chips, with processor definitions dating back to the A7. Additionally, support for macOS Arm targets has been added in this release. Because the ISPC compiler is based on the LLVM compiler stack, adding Arm support isn’t much of a challenge, but it will be interesting to see how well this SPMD programming compiler can perform on Arm.

In addition to the Apple Arm implementation, the ISPC 1.16 compiler also supports the upcoming Intel Alder Lake and Sapphire Rapids processors. GPU support remains in beta for ISPC 1.16, which in turn leverages a patched version of LLVM 12. On the GPU side, ISPC 1.16 adds initial multi-GPU support, support for unified shared memory, and preliminary Windows support for GPU compute.

Also notable in ISPC 1.16 is that the language now allows calling LLVM intrinsics directly from ISPC source. Being able to call LLVM intrinsics directly from code should allow better performance tuning in areas where new hardware instructions are not yet used by the ISPC standard library. ISPC 1.16 also adds an assume() optimization hint to communicate code assumptions to the optimizer.

More details on ISPC compiler version 1.16 are available via GitHub, where, beyond the BSD-licensed open source code, there are also reference binaries for Linux, macOS, and Windows.


Source link
