Linux Servers – Greguti Tue, 28 Jun 2022 03:17:44 +0000

OpenSSL 3.0.5 awaits release to fix potential security flaw • The Register Mon, 27 Jun 2022 23:30:00 +0000

The latest version of OpenSSL v3, a widely used open source library for secure networking using the Transport Layer Security (TLS) protocol, contains a memory corruption vulnerability affecting x64 systems with Intel's Advanced Vector Extensions 512 (AVX512).

OpenSSL 3.0.4 was released on June 21 to address a command injection vulnerability (CVE-2022-2068) which was not fully resolved with a previous patch (CVE-2022-1292).

But this version itself still needs fixing. OpenSSL 3.0.4 “is susceptible to remote memory corruption that can be trivially triggered by an attacker”, according to security researcher Guido Vranken. Imagine two devices establishing a secure connection between them using OpenSSL, and this flaw being exploited to execute arbitrary malicious code on one of them.

Vranken said that if this bug could be exploited remotely – and it’s not certain that it can – it could be more serious than Heartbleed, at least from a purely technical standpoint.

However, Vranken notes several mitigating factors, including the continued use of the 1.1.1 tree of the library rather than the v3 tree; the fork of libssl in LibreSSL and BoringSSL; the short time that 3.0.4 has been available; and the fact that the error only affects x64 with AVX512 – available on some Intel chips released between 2016 and early 2022.

Intel this year started to disable AVX512 support on Alder Lake, its 12th Gen Intel Core processors.

The bug, an AVX512-specific buffer overflow, was reported six days ago. It has been fixed, but OpenSSL 3.0.5 is not out yet.

Meanwhile, Linux distributions like Gentoo have not yet deployed OpenSSL 3.0.4 as a result of this bug and a test build failure bug. They therefore still ship OpenSSL 3.0.3, with its command injection defect.
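As the article describes it, the flaw only bites one narrow combination: OpenSSL 3.0.4 on x64 hardware advertising AVX512. A quick local check can be sketched in Python; the helper below is illustrative, not an official detection tool, and it assumes a Linux `/proc/cpuinfo`, the `avx512f` flag name, and that Python's `ssl` module links the system OpenSSL.

```python
# Illustrative check: is this machine in the affected set described above?
# Assumes Linux (/proc/cpuinfo) and that Python's ssl module links the
# system OpenSSL. Not an official detection tool.
import ssl

def is_affected(openssl_version, cpu_flags):
    """True only for the 3.0.4 release on a CPU advertising AVX512."""
    return openssl_version == "3.0.4" and "avx512f" in cpu_flags

def local_cpu_flags():
    """Collect CPU feature flags from /proc/cpuinfo (empty set elsewhere)."""
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass
    return flags

# e.g. ssl.OPENSSL_VERSION == "OpenSSL 3.0.4 21 Jun 2022"
version = ssl.OPENSSL_VERSION.split()[1]
print(version, is_affected(version, local_cpu_flags()))
```

Systems reporting anything other than 3.0.4, or whose CPUs lack AVX512, are outside the profile described here regardless of what else they run.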

In the GitHub issue thread discussing the bug, Tomáš Mráz, a software developer at the OpenSSL Foundation, argues that the bug should not be classified as a security vulnerability.

“I do not think this is a security issue,” he said. “It is just a serious bug making [the] 3.0.4 release unusable on AVX512-capable machines.”

Xi Ruoyao, a doctoral student at Xidian University, also said he disagreed with the policy of calling every heap buffer overflow a security hole. Vim, he said, started doing this this year and the result was something like ten “high-severity” vim CVEs every month without any proof-of-concept exploit code.

“I think we shouldn’t mark a bug as a ‘security vulnerability’ unless we have evidence showing it can (or at least may) be exploited,” he wrote, adding, however, that version 3.0.5 should be released as soon as possible because the bug is very severe.

Alex Gaynor, software resilience engineer at US Digital Service, argues otherwise, however.

“I’m not sure I understand how this isn’t a security flaw,” replied Gaynor. “This is a buffer overflow that can be triggered by things like RSA signatures, which can easily happen in remote contexts (e.g. a TLS handshake).”

Gaynor urged releasing the patch quickly. “I believe this issue is rated CRITICAL in the OpenSSL Vulnerability Severity Policy, and it effectively makes it impossible for users to upgrade to 3.0.4 to get its security patches,” he said. ®

PrivadoVPN Review: Disrupting the Market? Fri, 24 Jun 2022 10:00:39 +0000



  • 1 – Absolute hot waste
  • 2 – A kind of lukewarm waste
  • 3 – Severely flawed design
  • 4 – Some advantages, many disadvantages
  • 5 – Acceptably imperfect
  • 6 – Good enough to buy on sale
  • 7 – Great, but not best in class
  • 8 – Fantastic, with some footnotes
  • 9 – Shut up and take my money
  • 10 – Absolute Design Nirvana



PrivadoVPN is a relatively new player in the VPN market. It presents itself as a complete solution, offering security, privacy, and the ability to stream anything from anywhere. In this PrivadoVPN review, we put these claims to the test.

Here’s what we like

  • Generous free plan
  • Best for Netflix
  • Easy to use

And what we don’t like

  • Unreliable speeds
  • Not many features
  • Small interface

The short version is that PrivadoVPN can do a lot of what it claims, but never quite as well as you’d like, with the exception of getting into Netflix, which it does pretty well. While there’s a lot to like here, most of the providers in our roundup of the best VPNs do a better job, and I have a feeling PrivadoVPN’s main draw will be its generous free plan, which gives you 10 GB of bandwidth per month.

Note: We tested PrivadoVPN on a virtual machine running Windows. It also offers apps for Mac and Linux, as well as Android, iPhone, iPad, routers, and smart TVs.

PrivadoVPN Free and Paid Plans

When compared to other reputable free VPNs – not that there are many, mind you – 10 GB of data is generous. Only Windscribe offers as much. All you have to do to access the free plan is sign up with your email address, and you can choose from 12 servers around the world, which is pretty good.

If I had to choose, I would probably go with Windscribe because it has a longer and better track record. But there are no rules that say you can’t use both. Having two VPNs on the same system is usually not a problem.

Pricing plans for PrivadoVPN

When it comes to its paid plans, PrivadoVPN sits right in the middle of the pack at just under $60 per year. For that money, it gives you an above-average 10 simultaneous connections and servers in 58 cities around the world. That server count is a little low, but it’s well spread across the world, so you should be fine.

It’s hard to say whether PrivadoVPN is worth $60 a year; on one hand, it’s much cheaper than ExpressVPN, which costs $100 per year. On the other, you can sign up with Surfshark or NordVPN at their promotional prices and pay less than half as much for equivalent functionality. Maybe if PrivadoVPN added a few more features or beefed up the ones it has, it could be a strong contender. As it stands, the free plan is its best offer.

What can PrivadoVPN do?

For $60 a year, you get a VPN that’s got the basics down, but has a few issues. Chief among these are its speeds, which I discuss in detail below, as well as its less-than-great interface. However, it has an ace up its sleeve, namely Netflix.

PrivadoVPN and Netflix

PrivadoVPN’s greatest strength is by far how well it gets into Netflix, with the US servers doing a particularly excellent job. I tried three and all worked. As in my Surfshark review, this is surprising, as smaller services usually have a really hard time getting through, but apparently that’s no longer the case, at least for now.

The UK servers also did a good job, although I had a bit more trouble there with one in three not working. However, the BBC iPlayer was accessible, which makes up for that very well.

Overall, if you like your streaming, PrivadoVPN seems like a good choice, as long as you can handle some speed issues.

Other PrivadoVPN Features

Before I get to that, though, I should probably point out that Netflix connectivity is pretty much all there is when it comes to PrivadoVPN’s premium features. Unlike many competitors, which offer extra or even unnecessary features (like double VPN), PrivadoVPN just has… nothing. No split tunneling, no special servers. What you see is what you get.

While I have nothing against this kind of simple approach, it works best if a service handles the basics well. PrivadoVPN achieves this in most cases, but there’s one glaring problem I need to address: its speeds.

Internet Speeds: All Over the Place

Testing the speeds of any VPN is an inexact science at best: time of day, distance, type of server, many different factors can affect the type of readings you get. However, it’s extremely rare to find a service like PrivadoVPN where speeds fluctuate so much.

Generally speaking, I like to connect to four places in the world from my location in Cyprus, and I try to keep them more or less the same for each VPN I test: Israel, the UK, New York, and Japan. I then run each test three times, taking the best of the three. If I feel there is something strange in a reading, I repeat the process an hour later. I’ll usually also switch the VPN to the OpenVPN-TCP protocol, if it’s not already using that.
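That best-of-three bookkeeping is simple enough to sketch; the numbers below are the New York readings quoted later in this review, and the 50% tolerance used to flag an erratic server is my own illustrative threshold, not a formal benchmark rule.

```python
# Best-of-three speed test bookkeeping (readings in Mbps).
def best_of(readings):
    """The headline figure: the best run for a server."""
    return max(readings)

def is_erratic(readings, tolerance=0.5):
    """Flag a server whose worst run falls below half its best run."""
    return min(readings) < best_of(readings) * tolerance

# Three NYC runs, plus the three taken an hour later:
nyc = [42, 25, 40, 20, 19, 21]
print(best_of(nyc), is_erratic(nyc))
```

With a best run of 42 Mbps and a worst of 19 Mbps, the New York server gets flagged, which matches the erratic behaviour described below.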

In the case of PrivadoVPN, the test results fluctuated so much that even after running them three times I couldn’t sensibly put them in a table. For example, I tested the connection speed from Cyprus to New York and got readings ranging from incredibly good to really bad.

My base download speed over an unprotected connection was around 50 Mbps. The first time I tested the NYC server, I got an incredibly good reading of 42 Mbps. The second time, 25 Mbps. The third was again around 40 Mbps. Other American servers did much worse, which is odd, so I tried again an hour later. That time I got much worse speeds, all around the 20 Mbps mark.

This process repeated itself with every other server I tried, all over the world. The only server that stayed more or less consistent was the one in Japan, which was uniformly horrible at around 5 Mbps.

It is extremely rare for a VPN to be this erratic, so I cannot recommend PrivadoVPN for its speeds; they fluctuate too much. Subscribers are entitled to some stability, especially if they are going to download torrent files or use their VPN for streaming.

UI: Copying ExpressVPN’s Homework

When it comes to usability, PrivadoVPN is very good. It’s clearly taken a leaf out of ExpressVPN’s book with a recent update, offering a simple interface that’s, essentially, just two buttons: a main button to turn it on and one to select a location. It’s quite nice.

PrivadoVPN main screen

There’s very little that can go wrong here, which makes it perfect for people who don’t need all the bells and whistles offered by some VPNs (looking at you, NordVPN).

Overall, PrivadoVPN gets the job done. As with Surfshark and NordVPN, the kill switch is disabled by default (seriously, why do so many VPNs do this?), but unlike them, the button to enable it is right there on the main screen. It’s a clever solution.

Speaking of settings, PrivadoVPN is quite spare here, offering only the most basic options. While I like the simplicity, if you enjoy tinkering with your VPNs, you won’t have much fun with PrivadoVPN.

PrivadoVPN settings screen

That said, the fact that there’s so little that can go wrong with PrivadoVPN’s interface might recommend it to users who want something they can turn on and not think about. The upshot is that the user interface is nothing revolutionary. Then again, it doesn’t need to be.

Security and privacy: the basics are covered

When it comes to security and privacy, PrivadoVPN seems to be doing well. No serious breaches have been reported and the company seems to take privacy very seriously. However, I wouldn’t bet too much on the company’s claims about how being located in Switzerland protects you. The Swiss authorities are more than happy to cooperate with law enforcement around the world.

Regarding security, I did not detect any problems. I ran security tests on several connections and nothing out of the ordinary came up except that some UK servers showed up as being in France. It happens sometimes, it’s usually not a big deal, but it can mean that PrivadoVPN uses virtual servers for certain locations.

However, there is one strike against the service: like Surfshark, PrivadoVPN uses IKEv2 as its default VPN protocol, which I’m not a big fan of. Although extremely fast, it has known security issues. As such, I recommend manually switching to OpenVPN in the settings menu.

PrivadoVPN Protocols

Should you use PrivadoVPN?

It is very difficult to sum up PrivadoVPN in a word or even a sentence. Some things it handles very well, some not so much, and yet other things are downright wonky. While it’s far from a bad VPN, it’s not exactly a good one either. While I’m not saying you should walk away from it, I wouldn’t recommend it either.

The thing is, there are just too many competitors that can do what it does but have an edge over it. Surfshark is less expensive, NordVPN has more servers, ExpressVPN is faster, Mullvad is more private… the list goes on. By all means, give PrivadoVPN a try, but don’t be surprised if you end up using the 30-day money-back guarantee.



Solend votes to seize whale account • The Register Mon, 20 Jun 2022 14:46:00 +0000

Decentralized financial lending platform Solend tried to fend off the effects of the crypto meltdown over the weekend when 97.5% of its users voted to give it emergency powers to liquidate its biggest account customer. A second vote held today overturned the first.

Despite our best efforts, we were unable to get the whale to reduce its risk, or even come into contact with it.

Solend, which allows users to deposit cryptocurrencies to lend to other users, has a “whale” responsible for 95% of the SOL cryptocurrency deposited on its platform (worth $107 million). The same user borrowed 88% of the available USDC, a stablecoin pegged to the US dollar. All told, the account borrowed around $108 million in USDC and Ethereum. According to Solend’s website, it has loaned out a total of $195 million in crypto assets.

“Despite our efforts, we were unable to get the whale to reduce its risk, or even to make contact with it,” Solend said in its voting proposal.

The successful vote gave Solend the option to liquidate the user’s account via OTC trades if SOL fell to $22.30. It has come dangerously close to doing so recently – SOL is currently trading in the $35 range but fell to $26 last week as other cryptocurrencies fell.

At its peak alongside other cryptocurrencies in November 2021, SOL hit $260. Given its drop in value and its proximity to the point of liquidation, it’s understandable why Solend might be worried.

“CeFi at its best”

Comments on the governance page regarding the first such vote in Solend’s history are a mix of support for what some voters saw as proactive customer protection, and frustration that the move represents the opposite of decentralized finance.

Decentralized finance (DeFi) avoids intermediaries – banks and other institutions vested with authority that make centralized decisions. Ideally more democratic in practice, DeFi has proven ripe for the picking by cybercriminals. The unregulated nature of DeFi and cryptocurrency was also a key part of their meteoric fall from the highs of November 2021. In just over six months, the total market capitalization of cryptocurrency fell to less than a third of its maximum value.

Solend’s vote was intended to give it a chance to beat margin-triggered trading bots, which would presumably not act in the interests of Solend’s whale and would sell off SOL tokens as fast as possible.

The OTC trade that Solend would conduct on behalf of the whale would involve selling to a specific buyer at an agreed price.

A second vote, held less than a day after the first, invalidated the initial policy change and garnered more support than the first.

In addition to depriving Solend of the ability to seize the whale’s account, it also increased the governance voting time to one day, giving Solend time to “work on a new proposal that does not involve emergency powers to take over an account”.

Celsius, another crypto lending platform, has arguably been at the heart of some of the latest declines in the value of cryptocurrency. Like Solend, Celsius allows users to deposit funds that others can borrow, and last week it froze all activity.

Celsius on Sunday updated its users, saying the freeze is continuing, and that Celsius is also pausing its Twitter Spaces and suspending Q&A sessions. ®

Development of a robust technique for the transmission of synchronized data in real time from a Magnetic Observatory to an INTERMAGNET GIN Sat, 18 Jun 2022 10:10:35 +0000

Since internet availability at CPL is very limited due to its remote location away from any city, we approached BSNL (Bharath Sanchar Nigam Limited), maintained by the Government of India, for a reliable permanent fiber optic setup. But it was too expensive to set up and maintain, so we used the facilities of a local service provider, with a maximum bandwidth of 20 Mbps, to initiate the data transfer technique.

Initial setting

Online data transfer from CPL to the HYB observatory started with cross-platform data transmission, as ISP (internet service provider) resources were not available. As the service provider could not resolve some TCP/IP network issues affecting data transmission from one Linux machine to another remote Linux machine, we had to perform a cross-platform data transmission process, because the final data had to be processed by Windows-based MATLAB codes.

Initially, we implemented shell scripts, cron jobs, and the rsync protocol to transfer data from the Magrec-4B data logger to an intermediate Linux machine (CentOS) deployed at PLC. The data was transferred from the Magrec-4B to the Linux machine (backup storage) in the PLC control room with a latency of 5 min, and then to a Windows machine (client) at the HYB observatory using codes and scripts developed by us and third-party tools (Fig. 2). The bandwidth being low, we decided to transfer the data from the Linux machine to the Windows PC at HYB-NGRI with a latency of 1 min.

Figure 2

Cross-platform data transfer system from Linux PC (deployed at PLC) to Windows PC (deployed at HYB) and percentage of successful data transmission between systems.

We installed a batch file with the “Abort” option, confirmed with the “Off” option, to check the health of the connection on the client side (Windows PC), iterated with a default delay of 120 s. The session begins by verifying the host ID, username, and pre-entered password, authenticated with an RSA (Rivest–Shamir–Adleman) key via SFTP (Secure File Transfer Protocol). The terms ‘Comparison’ and ‘Synchronization’ in the figure show the details of data transmission from the host to the client machine at regular intervals of 120 s.

From the Magrec-4B, we selected 9 data parameters, as shown in Fig. 2, to transmit real-time data to the client machine. Details of the file size of each data parameter, and how fast data is transmitted from the host to the client machine, are included. The percentages in column 5 of Fig. 2 show the progress of transmitting and updating data on the client machine. 100% data transfer is only achieved when data is copied including the last 120 s of records. Additionally, the client machine double-checks the data by synchronizing the previous records of the current day. An example of the perpetual data transmission process with the latest records and the update process is also shown in line 9 of Fig. 2. Once the data is synchronized with the latest records (e.g., the filename of line 9 in Fig. 2), the 23% transmission of the file becomes 100% at the end of this task, after further synchronizing with previously recorded data. The file size of each of the nine parameters keeps increasing with every 120 s of updated data on the host machine. The whole process is repeated every 120 s until the day is over.
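The percentage bookkeeping described above reduces to a size comparison between the growing day file on the host and its copy on the client. The sketch below is illustrative (the byte counts are invented); it mirrors how a partially copied file can sit at 23% and reach 100% once the latest 120 s of records are synchronized.

```python
# Fraction of the host's current day file present on the client copy.
def percent_synced(host_bytes, client_bytes):
    if host_bytes == 0:
        return 100.0  # nothing to copy yet counts as fully synchronized
    return round(100.0 * min(client_bytes, host_bytes) / host_bytes, 1)

print(percent_synced(10_000, 2_300))   # partial copy: 23.0
print(percent_synced(10_000, 10_000))  # fully synchronized: 100.0
```

Because the host file grows every 120 s, a client copy that was at 100% drops below it on the next cycle, which is what drives the perpetual re-synchronization.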

As a large amount of data from the two observatories needs to be transferred, requiring dedicated storage to back up the data daily, we set up a server at the HYB observatory. In addition, the internet services at CPL have recently been upgraded to an increased bandwidth of 50 Mbps (the maximum bandwidth available today), which allowed us to configure the robust automated data transmission technique to the GIN; its details are discussed below.

Final Setup

Since our main objective was to achieve automated transmission of data within 1 min from the HYB and CPL observatories to the GIN, we had to make additional R&D efforts to develop a robust configuration covering both hardware (i.e., a high-end workstation and firewall/router configuration) and software. Thus, Python code, shell scripts, cron jobs, and the rsync protocol were developed to support data transmission without data loss. Even if internet services are disconnected, once they are restored, the Python code will recheck the data from the last successfully transmitted file.
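The recheck-after-outage behaviour can be sketched as a small piece of persisted state: record the last file shipped successfully and, on restart, resume from the next one. The file names, state-file location, and `send` callback below are all illustrative assumptions, not the observatory's actual code.

```python
# Resume-after-outage sketch: ship files in order, persisting progress
# so a restart picks up after the last successful transfer.
import json
from pathlib import Path

STATE = Path("last_sent.json")  # illustrative state-file location

def load_last_sent():
    if STATE.exists():
        return json.loads(STATE.read_text())["last"]
    return ""

def transmit_pending(all_files, send):
    """Send every file after the last successful one; persist progress."""
    last = load_last_sent()
    sent = []
    for name in sorted(all_files):
        if name <= last:
            continue  # already transmitted in an earlier run
        send(name)    # e.g. an rsync/SFTP upload in the real setup
        STATE.write_text(json.dumps({"last": name}))
        sent.append(name)
    return sent
```

After an outage, a fresh call to `transmit_pending` skips everything up to the name stored in `last_sent.json`, matching the recheck behaviour described above.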

The transfer of data from CPL and HYB to the central server located at the HYB observatory follows the RSH and SSH key mechanism, which is in itself very secure. We designed a system to transfer the data in a secure, encrypted model with SSH keys and to save the same data set on the local CSIR-NGRI server. We used the RSA-SSH (Rivest–Shamir–Adleman) algorithm, a widely used public-key cryptosystem for secure data transmission. The key generated by ssh-keygen on the source machine (MAGREC-DAS) creates two files, namely ‘id_rsa’ and ‘id_rsa.pub’, in the .ssh directory, which are shared/copied to the destination machine (CentOS). There is thus a perfect handshake between the source machine and the destination machine for data transfer. This configuration stays valid as long as the network stays the same, which is why we assigned a static IP address. In addition to the SSH keys, code was written to transfer the data using the rsync tool, and it was installed in the crontab to keep repeating with a 10 s delay. The same technique was also used at the HYB observatory, from the CentOS machine to the server, for secure and successful data transmission.

After successful R&D efforts of transmitting data from both observatories to a dedicated high-end Linux server, with a 24TB RAID-5 configuration at HYB Observatory, we created individual user accounts on the server, i.e. IMO-CPL, IMO-HYB, to store the data received from the respective observatories. The developed Python code will transfer several types of data from the DAS and store them daily in the respective user accounts (Fig. 3). Scripts developed from each Linux PC will filter data based on directory requirements (i.e., GIN). The sorted data of the individual directory will be transmitted with a latency period of 300 s to INTERMAGNET GIN.

Figure 3

Automated data transmission within 1 min from the (a) PLC and (b) HYB observatories to the Edinburgh GIN using Python code.

After successful transmission of data from both observatories to the GIN, we encountered a few minor issues; how we resolved them is discussed in detail below:

Issue 1: Initially, the Python code was executed using the rsync synchronization protocol with a minimum latency period of 60 s to transfer real-time data from the two observatories. As reported by GIN experts, with this latency period the same data was repeatedly sent to the receiving web service (Fig. 4a), due to which the GIN storage/cache was receiving huge volumes of data from both observatories. This caused problems for their entire web service: log files filled up very quickly, and the web service data cache became difficult to use as it took up a lot of disk space (Fig. 4b).

Figure 4

(a) Details of the data cache for the two Observatories on the INTERMAGNET BGS website (b) error message saying “no space left” due to huge amount of duplicate data on BGS server.

Solution: To solve the above problem, we created background daemons instead of the rsync synchronization protocol, and the data recheck interval was changed from every 60 s to every 300 s. The background daemons execute the Python code every 300 s for smooth real-time data transmission without any duplication (as shown in Fig. 3).
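The deduplicating 300 s cycle can be sketched as follows; the record source, the upload callback, and running for a fixed number of cycles (rather than forever) are illustrative simplifications of the daemon described above.

```python
# 300 s background cycle: each pass uploads only records that earlier
# passes have not already shipped, avoiding duplicates on the GIN side.
import time

def run_cycles(read_records, upload, cycles, period=300.0):
    seen = set()
    for _ in range(cycles):
        fresh = [r for r in read_records() if r not in seen]
        if fresh:
            upload(fresh)      # one batch per cycle, duplicates filtered
            seen.update(fresh)
        time.sleep(period)

# Demo with a zero-second period: the second cycle uploads nothing new.
batches = []
run_cycles(lambda: ["hyb_rec1", "hyb_rec2"], batches.append, cycles=2, period=0)
print(batches)  # [['hyb_rec1', 'hyb_rec2']]
```

The `seen` set is what distinguishes this loop from the earlier rsync polling: an unchanged record is never re-sent, so the receiving cache no longer fills with duplicates.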

Issue 2: After successful transmission of data from both observatories, on a few occasions the data plotting services at INTERMAGNET did not reflect our data even though our hardware and software were intact. We cross-checked the logs on our end and found that the data had been successfully uploaded to the GIN. Even though the uploads succeeded, the reason the data was not plotted on the INTERMAGNET website was unknown.

Solution: The above issue was resolved after BGS experts suggested a link for uploading a one-day file to check whether the upload was successful. As suggested by BGS, if the data upload fails with errors (Fig. 4), the problem lies with the INTERMAGNET server. This verification allowed us to confirm that the code we are running works correctly (Fig. 5).

Figure 5

The overlap of (a) PLC and (b) HYB data logs from HYB Observatory server to GIN server.

Introducing Ghostwriter v3.0 – Security Boulevard Tue, 14 Jun 2022 22:56:44 +0000

The Ghostwriter team recently released v3.0.0. This release represents an important milestone for the project, and there’s never been a better time to try Ghostwriter.

Our goal was to greatly simplify the installation and management of the application and to allow the addition of external functionality via an API. This release accomplishes all of that and more, and we’re excited for you to see it.


Introducing Ghostwriter CLI

For this release, we’ve created a brand new tool to help you manage Ghostwriter services, Ghostwriter CLI!

GitHub – GhostManager/Ghostwriter_CLI: Golang CLI binary used to install and manage Ghostwriter

Written entirely in Go, this command-line tool can be cross-compiled to support Windows, macOS, and Linux, so you can use any operating system you want as a host system for Ghostwriter. You only need to have Docker installed.

Ghostwriter CLI greatly simplifies server management. Current Ghostwriter users will notice that we have removed the need for old environment files. We’ve even removed the requirement for you to generate TLS/SSL certificates for production environments (unless you want to use your own signed certificates).

$ ./ghostwriter-cli help
Ghostwriter-CLI ( v0.1.1, 8 June 2022 ):
*** source code: ***
help
    Displays this help information
install {dev|production}
    Builds containers and performs first-time setup of Ghostwriter
build {dev|production}
    Builds the containers for the given environment (only necessary for upgrades)
restart {dev|production}
    Restarts all Ghostwriter services in the given environment
up {dev|production}
    Bring up all Ghostwriter services in the given environment
down {dev|production}
    Bring down all Ghostwriter services in the given environment
config
    ** No parameters will dump the entire config **
    get [varname ...]
    set <var name> <var value>
    allowhost <var hostname/address>
    disallowhost <var hostname/address>
logs <container name>
    Displays logs for the given container
    Options: ghostwriter_{django|nginx|postgres|redis|graphql|queue}
running
    Print a list of running Ghostwriter services
update
    Displays version information for the local Ghostwriter installation and the latest stable release on GitHub
test
    Runs Ghostwriter's unit tests in the development environment
    Requires `ghostwriter_cli install dev` to have been run first
version
    Displays the version information at the top of this message

The new quick install guide describes how to use Ghostwriter CLI:

Quick Start

We will continue to develop this new tool to simplify server updating and other maintenance tasks.

Finalizing the GraphQL API

If you have followed Ghostwriter over the past year, you may have heard about the GraphQL API. The initial version of the API is production ready and will soon replace the old minimal REST API! The GraphQL API documentation is available here:


Ghostwriter uses the fantastic Hasura GraphQL Engine to manage the API. You can access the Hasura console to explore and build your queries.

Running a “whoami” query in the console

The new API allows you to interact with all aspects of Ghostwriter to perform tasks such as:

  • Updating domain categorization
  • Synchronizing your domain library with a registrar
  • Pulling project data into a workflow or custom reporting tool
  • Exporting results from a tool like Burp Suite into a Ghostwriter report
  • Pushing new projects and assignments from a CRM or project planner

The API offers many possibilities for integration with external tools. For example, SpecterOps uses the API to transfer infrastructure information from an external application to Ghostwriter. Each time the app creates a new server for a project, it updates Ghostwriter’s project dashboard.
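Integrations like this boil down to a token-authenticated POST, in the shape of the “whoami” query shown in the console. The endpoint path, the query fields, and the bearer-token header below are assumptions modeled on typical Hasura deployments; check the GraphQL API documentation for the authoritative details.

```python
# Sketch of a token-authenticated GraphQL request (stdlib only).
import json
import urllib.request

def whoami_request(base_url, api_token):
    """Build a POST request for a hypothetical whoami query."""
    payload = {"query": "query Whoami { whoami { username role } }"}
    return urllib.request.Request(
        base_url + "/v1/graphql",          # assumed Hasura endpoint path
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_token,  # assumed header scheme
        },
    )

req = whoami_request("https://ghostwriter.local", "YOUR_API_TOKEN")
print(req.get_method(), req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` against a real server would return a JSON body; here only the request construction is shown, since the host name is a placeholder.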

With this new API, managing API tokens is simplified. Users can now visit their profiles to generate API tokens and view or revoke existing tokens.

Managing API tokens in a user profile

Note: Until we update cobalt_sync and mythic_sync, Ghostwriter will still issue the old REST API keys for logging activities with these tools. Soon, these projects will transition to using the GraphQL API and the new API tokens, and a future v3.x.x release will remove the old REST API endpoints and keys. This delay will also give other projects using the REST API time to switch to the GraphQL API.

New CVSS calculator

This release also supports CVSS scores for findings. This feature was a popular request in our user survey, and @therealtoastycat on GitHub stepped up and contributed it to the project.

You will see CVSS Score and CVSS Vector fields when editing a finding. You can fill in these fields manually or use the new CVSS calculator to automatically set the score, vector, and severity dropdown!
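Under the hood, a CVSS 3.1 vector is just a slash-separated list of metric:value pairs, which is what a calculator parses before computing the score. A minimal parsing sketch (the scoring math itself follows the FIRST CVSS 3.1 specification and is omitted here):

```python
# Split a CVSS 3.1 vector string into its metric/value pairs.
def parse_vector(vector):
    head, _, metrics = vector.partition("/")
    if not head.startswith("CVSS:"):
        raise ValueError("not a CVSS vector: " + vector)
    return dict(metric.split(":") for metric in metrics.split("/"))

v = parse_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(v["AV"], v["C"])  # N H
```

This particular vector (network attack vector, low complexity, no privileges, high impact across the board) is the kind of worst-case profile that maps to a critical severity.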

The new CVSS calculator in action


These new features and improvements are some of the biggest changes in v3.0.0, but the full changelog contains many more tweaks. We’ve fixed a few bugs, added support for quote formatting in Word reports, improved the use of date filters in reports, and more.

You can view the full list here:

Ghostwriter/ at master GhostManager/Ghostwriter

We’re working on examples to show how you can leverage the GraphQL API for automation, pull/push insights, and more. In August, we will present these examples and Ghostwriter v3 at Black Hat USA Arsenal. We’ll announce where you can find Ghostwriter once Black Hat updates the schedule.

If you miss Ghostwriter at Arsenal, you can also find us at the SpecterOps stand. We hope to see you there!

Introducing Ghostwriter v3.0 was originally published in Messages from SpecterOps team members on Medium, where people continue the conversation by highlighting and responding to this story.

*** This is a syndicated blog from the Security Bloggers Network of SpecterOps Team Member Posts – Medium, written by Christopher Maddalena. Read the original post at:

System Engineer (Linux Focus) at 2Cana Solutions Mon, 13 Jun 2022 14:50:10 +0000

Travel: Limited, but may be required both locally and internationally.
Location: Remote, but should be a reasonable driving distance from our Durban office
Staff: None
On call: every other week

We are looking for an experienced and motivated Systems Engineer with excellent technical skills to join the infrastructure team of our growing organization to support enterprise systems for our customers and ourselves. The position is primarily Linux focused, but would involve other related aspects such as virtualization, cloud, storage, application and hardware support.

General description of the role
The role will include some/all of the following:

  • Plan and coordinate the installation, administration, maintenance and upgrade of Linux server operating system, hardware, software and virtualization technologies, both local and remote
  • Administration of backup software and formulation and testing of disaster recovery strategies
  • Analyze system performance, formulate and implement recommendations for system improvements and tuning.
  • Support servers and systems, and diagnose and resolve system issues
  • Research and development in the Linux environment
  • Automation of administrative tasks by learning new technologies to improve efficiency and maintain effective management of our growing environment
  • Performance monitoring, troubleshooting and tuning, daily checks and administration of monitoring systems (Nagios, etc.)
  • Monitor and administer mail, web and reverse proxy servers
  • System security management, including server hardening and regular patching, using automation where possible. Monitor system security.
  • Document installations, system modifications, DR and installation procedures, standards and methodology
  • Liaise with development teams and clients regarding projects and day-to-day requirements; provide support to users and developers based on helpdesk calls

Experience in these areas is an asset:

  • Oracle engineered systems, specifically Oracle Database Appliance (ODA)
  • Oracle Linux Virtualization Manager (OLVM) / KVM
  • Ansible or similar automation tools
  • Commvault Backup Solutions
  • A working understanding of Windows systems and networking is an advantage
  • Shell/Perl scripting experience
  • SAN experience and storage protocols


Certifications and education:

  • RHCSA / RHCE / Linux+ / Oracle Certified Professional / LPIC or similar Linux certification
  • Diploma or bachelor’s degree in information technology is an asset

Desired skills:

  • A minimum of 3 years of experience in a Linux administration role is essential
  • Solid knowledge of Oracle Linux 6/7/8 (or Red Hat/CentOS/Debian)
  • Ability and desire to quickly learn and implement new technologies
  • Excellent attention to detail and problem-solving skills
  • Strong business writing and documentation skills
  • Excellent team player
  • Good customer service skills
  • Excellent decision-making, organizational and time-management skills; able to work independently and take ownership of problems
  • Effective communication and interpersonal skills
  • Strong ability to establish and maintain relationships with people

Desired work experience:

  • 2 to 5 years Systems / Network Administration

Desired level of qualification:

About the employer:

About 2Cana Solutions
We are a dynamic and exciting software company with local and international clients. We are an Oracle Strategic Partner with a strong focus on technical excellence and a passion for helping our customers succeed. Our main focus is the insurance industry. We set up and develop for major insurance companies locally and internationally. [URL Removed] We take pride in what we do and work professionally with our clients while fostering a culture of learning and sharing within our team.

Find out more/Apply to this position

All you need to know Sat, 11 Jun 2022 13:15:00 +0000

Installing a few smart plugs, switches, or bulbs and controlling them through your smartphone doesn’t make your home smart. A smart home should be smart enough to make decisions, automate your devices, and send notifications and alerts based on events, time, or information from various sensors.

Although some manufacturers provide some basic options in their apps to automate their smart devices, they are connected to cloud servers and do not work if the network or internet is down, which makes them unreliable. Additionally, they may also log or collect your activity data on their cloud servers, such as when, where, and how you use your smart devices.


What is Home Assistant and why would you want to use it?

Home Assistant (HA) is a free and open source home automation software that helps you create a localized smart home with complete privacy. It is a flexible, reliable and more secure solution than its cloud-based alternatives, such as Homebridge, SmartThings or Alexa Routines.

HA lets you control and access your smart home devices over the local network. So your smart home is not dependent on cloud servers or internet connection and will continue to work regardless of internet availability. Since it’s local, it’s also faster and more consistent.

You can integrate all your compatible smart devices, such as sockets, switches, lights and sensors, with Home Assistant, control them individually or in groups and create automation.

You can also create DIY Home Assistant smart switches, lights and sensors and use them to automate your home in privacy.

If you already use smart devices at home or in the office, chances are they will work with Home Assistant, as it supports over 1,900 devices and services. If the devices are connected to your network, Home Assistant will automatically scan for and detect known devices, which you can configure and control through the web interface or the Home Assistant app.

What can Home Assistant do?

Home Assistant is like a smart hub that you can use to add all your smart devices, integrate them as entities, and control them from a single web interface or HA app on a smartphone or tablet. It can also improve the functionality of smart devices and provide more features.

Home Assistant also lets you control your devices through Alexa or Google Assistant smart speakers, though this requires a Nabu Casa subscription.

If you have smart devices installed at home or in the office that you currently control through different mobile apps, you can integrate them with Home Assistant to control them individually or in groups.

You can add rule-based automation where you can create routines or trigger devices based on time, event, conditions, and actions. You can also add automation scripts to define or specify a sequence of actions that Home Assistant will perform when the script is activated.
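As an illustration, a simple rule-based automation in Home Assistant’s YAML configuration might look like the sketch below. The entity IDs are hypothetical; adapt them to your own devices:

```yaml
# Hypothetical example: turn on the porch light at sunset,
# but only when someone is home.
automation:
  - alias: "Porch light at sunset"
    trigger:
      - platform: sun        # fires on a sun event
        event: sunset
    condition:
      - condition: state     # only continue if the family group is home
        entity_id: group.family
        state: "home"
    action:
      - service: light.turn_on
        target:
          entity_id: light.porch
```

The same automation can also be built visually in the web interface, which generates equivalent YAML behind the scenes.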

For example, you can build a smart water/salt level sensor using an ultrasonic sensor and an ESP8266 board to measure tank volume and send notifications to your smartphone and voice alerts via the Alexa smart speaker when the tank level reaches a certain depth.

Similarly, you can create a DIY smart home energy monitoring device that reports real-time energy consumption to the Home Assistant interface. It keeps logs and tracks daily energy use on an hourly basis. You can also add the cost per kWh to estimate your electricity bills.

If you have a traditional air conditioner or HVAC unit, you can use Home Assistant to add Wi-Fi control and make your air conditioner smart without touching the unit.

We have already covered several guides on building DIY Home Assistant smart devices to automate your home. You can check out our DIY section to learn more.

We highly recommend building DIY smart devices, as they do not require the internet or third-party servers to work and integrate natively with Home Assistant. Using ESPHome and Tasmota firmware, you can quickly build and deploy smart switches, lights, and sensors in 3D-printed enclosures for a polished look.

What are the potential drawbacks of Home Assistant?

There are a few caveats about using Home Assistant that you should consider before deploying one in your home.

  1. With Home Assistant, the learning curve is steep. You’ll have to work through the extensive documentation and experiment by trial and error to make sure everything works.
  2. Home Assistant receives regular updates that fix security bugs and add improvements. When a major update arrives, old tutorials or guides may no longer work or become outdated and require a different approach or manual tweaks that you may need to figure out.

However, there is a huge community to help you out if you run into issues; most problems have likely already been solved by others. Also, once you have set up a few devices, you will understand most of what is involved in integrating and controlling your smart devices.

What do you need for a Home Assistant installation?

You can install Home Assistant on the following devices:

  1. Windows
  2. Mac
  3. Linux
  4. Intel NUC-based systems and old laptops
  5. ASUS Tinkerboard
  6. ODROID
  7. Raspberry Pi 3 or 4

To access the Home Assistant dashboard to control devices, you can use the Home Assistant app available for iOS, iPadOS, and Android smartphones or use a web browser on any compatible device.

How to install Home Assistant

Although there are four different ways to install Home Assistant, it is recommended that you follow one of the following two methods to install Home Assistant on compatible hardware:

Home Assistant operating system (with supervisor)

This version of Home Assistant comes with the Supervisor to manage the Home Assistant Core and add-ons. It is much easier to set up and doesn’t require you to change settings manually or through the command line. You can install HA OS on single-board computers such as the ASUS Tinkerboard, ODROID, or Raspberry Pi. We recommend this method to install and configure Home Assistant on a Raspberry Pi 4 with at least 4 GB of RAM.

Home Assistant Container (Without Supervisor)

You can also install Home Assistant in a Docker container. However, this does not come with the Supervisor or add-ons; you must install any required add-ons manually via the command line. You can use this method to install HA on Windows, Mac, or Linux PCs and older laptops.
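On a Linux machine, the container method can be sketched as follows. This is a minimal sketch based on Home Assistant’s commonly documented Docker setup; adjust the configuration path and timezone for your system:

```shell
# Directory on the host where Home Assistant will keep its configuration.
CONFIG_DIR="$HOME/homeassistant"
mkdir -p "$CONFIG_DIR"

# Only attempt the run when Docker is actually available on this machine.
if command -v docker >/dev/null 2>&1; then
  # Run Home Assistant Core in a container (no Supervisor).
  docker run -d \
    --name homeassistant \
    --restart=unless-stopped \
    --privileged \
    -e TZ=UTC \
    -v "$CONFIG_DIR:/config" \
    --network=host \
    ghcr.io/home-assistant/home-assistant:stable
else
  echo "Docker is not installed; install it first."
fi
```

Once the container is up, the web interface is reachable on port 8123 of the host.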

Home Assistant for a secure and private smart home

With Home Assistant, you can create a truly private and more secure smart home than cloud-based solutions. You can buy smart devices or build them yourself and integrate them with Home Assistant. If you want your activity data to remain private, consider deploying Home Assistant for home automation and smart device control.

KIOXIA Announces Optimized Platform Certified with KumoScale Software | New Thu, 09 Jun 2022 21:17:07 +0000

SAN JOSE, Calif.–(BUSINESS WIRE)–June 9, 2022–

KIOXIA America, Inc. is pleased to announce that Dell Technologies OEM Solutions will provide a platform with optimized configurations for KIOXIA KumoScale™ storage software. Based on the Dell PowerEdge R6525 rack server using KIOXIA CM6 Series Enterprise NVM Express™ (NVMe™) SSDs, the systems will be available in configurations well-suited to the demands of high-performance network-attached storage.

This press release features multimedia. View the full release here:

KumoScale software-ready certified system configurations are available from Dell’s distributor, Arrow Electronics. (Graphic: Business Wire)

“We are pleased to work with Dell OEM Solutions to certify the PowerEdge R6525 rack servers for use with KumoScale software,” said Joel Dedrick, vice president and general manager, Network Storage Solutions at KIOXIA America, Inc. “Customers now have the option to purchase a variety of pre-configured and certified systems for high-performance on-premises cloud infrastructure.”

KumoScale software-ready certified system configurations are available from Dell’s distributor, Arrow Electronics. Configurations offered include single and dual AMD EPYC™ processors, come equipped with KIOXIA CM6 enterprise-grade NVMe SSDs, and offer capacities of up to 153TB per node.

“Our collaboration with Dell and KIOXIA allows us to serve as a key integrator of this exciting and powerful solution combining the Dell platform and KIOXIA KumoScale software,” said James Stannard, vice president of sales for Arrow’s Intelligent Solutions business in the EMEA region. “Customers and end users can rely on Arrow’s broad geographic presence and technical know-how to select the right system to meet their specific needs.” Arrow Electronics offers a selection of Dell PowerEdge rack servers for customers installing KumoScale storage software. More information regarding platform details, pricing, warranty and ordering can be obtained by emailing us at

KumoScale Software Delivers Powerful Network Storage for On-Premises Data Centers

Designed for data center-scale deployment, the KumoScale storage system delivers high-performance NVMe flash storage as a disaggregated network service at cloud-native scale and cost. In addition to the Dell PowerEdge R6525, KumoScale software is designed to run on other x86 servers (Intel® and AMD®).

KumoScale software integrates tightly with Kubernetes® and OpenStack® platforms, as well as popular automation, telemetry and logging frameworks. KumoScale software eases IT workloads through highly automated provisioning, management, and optimization of storage resources at scale. Please visit to learn more.

About KIOXIA America, Inc.

KIOXIA America, Inc. is the American subsidiary of KIOXIA Corporation, one of the world’s leading suppliers of flash memory and solid-state drives. From the invention of flash memory to today’s revolutionary BiCS FLASH™ 3D technology, KIOXIA continues to pioneer innovative memory, SSD and software solutions that enrich people’s lives and expand the horizons of society. The company’s innovative 3D flash memory technology, BiCS FLASH, is shaping the future of storage in high-density applications including advanced smartphones, PCs, SSDs, automotive and data centers. For more information, please visit

© 2022 KIOXIA America, Inc. All rights reserved. The information in this press release, including product prices and specifications, service content and contact information, is current and believed to be accurate as of the date of the announcement, but is subject to change without notice. The technical and application information contained herein is subject to the latest applicable KIOXIA product specifications.


* KIOXIA has run certification tests on the Dell EMC PowerEdge R6525 platform to verify configuration operations, deployment modes, NVMe-oF IO testing between initiator and target, and hardware compatibility with various NVMe SSDs.

Dell, Dell Technologies and PowerEdge are registered or unregistered trademarks of Dell Inc.

The Arrow name and logo and all related product and service names, design marks and slogans are trademarks, service marks or registered trademarks of Arrow and may not be used in any way without the prior written permission of Arrow. Other product and service marks are trademarks of their respective owners.

The word marks NVM Express, NVMe and NVMe-oF are registered and unregistered trademarks and service marks of NVM Express, Inc.

Kubernetes is a registered trademark of the Linux Foundation in the United States and other countries, and is used under license from the Linux Foundation

The OpenStack® word mark is a registered trademark of the OpenStack Foundation in the United States and other countries and is used with permission of the OpenStack Foundation. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community.

AMD EPYC and combinations thereof are trademarks of Advanced Micro Devices, Inc.

Intel is a registered trademark of Intel Corporation or its subsidiaries in the United States and/or other countries.

All company names, product names and service names may be trademarks of their respective companies.

Show source version on


Dena Jacobson

Lages & Associates

Tel.: (949) 453-8080

COMPANY CONTACT:

Mia Cool

KIOXIA America, Inc.

Tel.: (408) 526-3087



SOURCE: KIOXIA America, Inc.

Copyright BusinessWire 2022.

PUBLISHED: 06/09/2022 5:15 p.m. / DISK: 06/09/2022 5:17 p.m.

Fix “Java Not Recognized” Error Wed, 08 Jun 2022 05:05:30 +0000

How to Fix “Java Not Recognized” Error

Here are the three easiest ways to fix Java’s “not recognized as an internal or external command” error:

  1. Install or reinstall Java and the JDK on your computer
  2. Add Java’s bin directory to the computer’s PATH
  3. Restart the command prompt, terminal window or PowerShell

If the JDK is not installed or the PATH is misconfigured, a “Java not recognized” error will occur.

Is Java installed?

To run the Java command, you must have Java installed.

This may mean that the Java Runtime Environment (JRE) or the full Java Development Kit (JDK) is installed. But to fix the “Java not recognized” error, you must first ensure that Java has been installed.

There are many ways to install Java on Windows or Ubuntu.

Install Java on Ubuntu

To install Java on Ubuntu, a single apt install command is sufficient.

sudo apt install default-jre
Setting up default-jre (2:1.11-72build2) ...

A Java installation on Windows can be done quickly by downloading the Adoptium JDK and running the .msi file with all the default options selected.

If Java is not installed, the “Java not recognized” error cannot be fixed; installing it is the first step.

Add Java to the PATH

The bin directory of the JDK installation is where the Java executable is located.

If the JDK or JRE’s bin directory has not been added to the Windows or Linux PATH variable, the operating system will not be able to find the java executable at the command line. This causes the “Java not recognized” error even when the JDK or JRE is correctly installed.

Make sure the JDK’s bin directory is on the PATH to avoid the “internal or external command” error.
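As a sketch, on a Linux system the PATH can be fixed like this. The JDK location shown is only an example; substitute the directory where your JDK actually lives:

```shell
# If a java binary is already on the PATH, show where it really lives.
if command -v java >/dev/null 2>&1; then
  readlink -f "$(command -v java)"
fi

# Add the JDK's bin directory to PATH for the current shell session.
# The directory below is illustrative; substitute your actual JDK location.
JDK_BIN="/usr/lib/jvm/default-java/bin"
export PATH="$PATH:$JDK_BIN"

# Persist the change for future shells (assumes bash as the login shell).
echo "export PATH=\"\$PATH:$JDK_BIN\"" >> "$HOME/.bashrc"
```

On Windows, the equivalent is to add the JDK’s bin folder to the Path variable under System Properties → Environment Variables.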

Restart the terminal window

Terminal windows and command prompts read environment variables only when they first start.

If the JDK has been installed and the PATH variable is set correctly, you will still encounter a “Java not recognized” error if PowerShell or the command prompt has not been restarted.

Restart your terminal window or application that needs to find the Java command, then try again. The “Java not recognized” error should be gone for good.

Full text of Java error

The full text of the Java not recognized error is as follows:

C:java-error-fix> java -version 
'java' is not recognized as an internal or external command, 
operable program or batch file.

When Java is installed and configured correctly, this error disappears.

Japan lets its banks issue stablecoins • The Register Mon, 06 Jun 2022 06:55:00 +0000

Japan’s parliament has passed legislation allowing yen-linked stablecoins, becoming one of the first countries – and by far the largest economy – to regulate a form of non-fiat digital currency.

The regulations state that only banks and other registered financial institutions — such as money transfer agents and trust companies — can issue the alterna-cash. Intermediaries, or those responsible for the movement of currency, will be required to adopt stricter anti-money laundering measures. The rules also define stablecoins as digital currency and guarantee repayment at face value.

The Japan Financial Services Agency (FSA) launched the scheme in a March 2021 proposal. Parliamentary approval of the proposal means it will come into force in 2023. The regulations will apply to domestic financial institutions as well as foreign operations that target Japanese users. The research material supporting the decision relied heavily on trends in the US and Europe.

On the day of the decision, the FSA published a document that considers the global use of stablecoins and advocates their use in Japan – with appropriate regulation.

In the United States, the FSA noted, stablecoins are unregulated, but those who handle them must heed anti-money laundering laws and other rules, such as counter-terrorism financing regulations.

The document [PDF] also cites tightened regulations in the UK and Singapore. In January, the Monetary Authority of Singapore (MAS) took action to limit the promotion of digital payment tokens and the UK Treasury tightened regulations on the solicitation of sales of certain crypto assets.

Meanwhile, Mitsubishi UFJ Trust and Banking Corp. said [PDF] that once the legal framework is in place, it will launch a yen-backed stablecoin called Progmat Coin.

Government regulation of stablecoins is likely to be welcomed, given the dramatic implosion of the so-called stablecoin TerraUSD. That cryptocurrency saw its value drop by 90 percent in May 2022, and after the plunge the value of the linked cryptocurrency Luna fell to almost nothing, suggesting that DIY currency systems lack the maturity that keeps fiat currencies afloat. ®