March 13, 2023
Galicia Seeks 'Carbon-Neutral' Data Center Amid Sector's Rising Energy Use
Impulsa Galicia and Ingenostrum have teamed up with the goal of building a €400 million (US$426.5 million) carbon-neutral data center in Spain. According to Impulsa Galicia – a public-private initiative backed by the Galicia region's decision-making body – the center could accommodate data from most Galician businesses. Although the companies have only agreed to conduct a feasibility study, the project serves as a reminder of the growing need to increase data center capacity while slashing emissions.

The 15-megawatt (MW) center would follow a larger, 70MW facility in Cáceres, in the Extremadura region, also planned by Ingenostrum. While that project too is dubbed carbon-neutral, there is no mention of batteries or other storage technology, raising questions about what carbon-neutral electricity will power the data center when the sun isn't shining – which happens even in Spain.

When it comes to the green credentials of data centers – or pretty much anything else – the devil is always in the details. Data centers and data transmission each account for between 1% and 1.5% of global electricity consumption, according to the International Energy Agency (IEA), which sums up the sector's progress toward the goal of reaching net zero by 2050 as "more efforts needed."

While demand and energy use are expected to grow in the coming years, the industry needs to significantly reduce its emissions if it is to align with the global goal of reaching net zero by 2050. According to the IEA, emissions need to be halved by 2030. As a result, pressure from both governments and customers is growing to make data centers greener.

In the data center operators' defense, they have managed to significantly improve energy efficiency. Since 2010, emissions have increased only modestly, despite demand skyrocketing as global Internet traffic grew 20-fold.
The largest data center operators have also started to contract renewable energy for their facilities, with Amazon, Microsoft, Meta and Google becoming the four largest buyers of corporate renewable power purchase agreements. But that still does not erase their growing energy consumption and emissions footprint.

Continue reading this article on Light Reading.
March 04, 2023
FBI and CISA warn of increasing Royal ransomware attack risks - Bleeping Computer
CISA and the FBI have issued a joint advisory highlighting the increasing threat behind ongoing Royal ransomware attacks targeting many U.S. critical infrastructure sectors, including healthcare, communications, and education.

This follows an advisory issued by the Department of Health and Human Services (HHS), whose security team revealed in December 2022 that the ransomware operation had been linked to multiple attacks against U.S. healthcare organizations.

In response, the FBI and CISA shared indicators of compromise and a list of tactics, techniques, and procedures (TTPs) linked to the operation, which should help defenders detect and block attempts to deploy Royal ransomware payloads on their networks.

"CISA encourages network defenders to review the CSA and to apply the included mitigations," the U.S. cybersecurity agency said on Thursday.

The federal agencies are asking all organizations at risk of being targeted to take concrete steps to protect themselves against the rising ransomware threat. To safeguard their organizations' networks, enterprise admins can start by prioritizing the remediation of any known vulnerabilities attackers have already exploited. Training employees to spot and report phishing attempts effectively is also crucial.
Cybersecurity defenses can further be hardened by enabling and enforcing multi-factor authentication (MFA), making it much harder for attackers to access sensitive systems and data.

Samples submitted to the ID-Ransomware platform for analysis show that the enterprise-targeting gang has been increasingly active since late January, underscoring this ransomware operation's outsized impact on its victims.

Royal ransomware sample submissions (ID-Ransomware)

Request for Royal incident reports

Even though the FBI says that paying ransoms will likely encourage other cybercriminals to join the attacks, victims are urged to report Royal ransomware incidents to their local FBI field office or CISA, regardless of whether they've paid a ransom.

Any additional information will help collect the data needed to track the ransomware group's activity, stop further attacks, or hold the attackers accountable for their actions.

Royal ransomware is a private operation made up of highly experienced threat actors known for previously working with the notorious Conti cybercrime gang.
The gang's activity has surged since September, even though it was first detected in January 2022. Although they initially deployed encryptors from other operations like BlackCat, they have since transitioned to using their own. The first was Zeon, which generated ransom notes similar to those used by Conti, but they switched to a new encryptor in mid-September after rebranding to "Royal."

The malware was recently upgraded to encrypt Linux devices, specifically targeting VMware ESXi virtual machines. Royal operators encrypt their targets' enterprise systems and demand hefty ransom payments ranging from $250,000 to tens of millions of dollars per attack.

This ransomware operation also stands out from the crowd due to its social engineering tactics: it deceives corporate victims into installing remote access software as part of callback phishing attacks, in which the attackers pretend to be software providers and food delivery services.

In addition, the group employs a unique strategy of using hacked Twitter accounts to tweet details of compromised targets to journalists, hoping to attract news coverage and add further pressure on their victims. These tweets contain a link to leaked data, which the group allegedly stole from the victims' networks before encrypting them.
March 07, 2023
FACT SHEET: Biden-Harris Administration Announces National Cybersecurity Strategy | The White House
Today, the Biden-Harris Administration released the National Cybersecurity Strategy to secure the full benefits of a safe and secure digital ecosystem for all Americans. In this decisive decade, the United States will reimagine cyberspace as a tool to achieve our goals in a way that reflects our values: economic security and prosperity; respect for human rights and fundamental freedoms; trust in our democracy and democratic institutions; and an equitable and diverse society. To realize this vision, we must make fundamental shifts in how the United States allocates roles, responsibilities, and resources in cyberspace.

We must rebalance the responsibility to defend cyberspace by shifting the burden for cybersecurity away from individuals, small businesses, and local governments, and onto the organizations that are most capable and best-positioned to reduce risks for all of us. We must realign incentives to favor long-term investments by striking a careful balance between defending ourselves against urgent threats today and simultaneously strategically planning for and investing in a resilient future. The Strategy recognizes that government must use all tools of national power in a coordinated manner to protect our national security, public safety, and economic prosperity.

VISION

Our rapidly evolving world demands a more intentional, more coordinated, and more well-resourced approach to cyber defense. We face a complex threat environment, with state and non-state actors developing and executing novel campaigns to threaten our interests. At the same time, next-generation technologies are reaching maturity at an accelerating pace, creating new pathways for innovation while increasing digital interdependencies.

This Strategy sets out a path to address these threats and secure the promise of our digital future.
Its implementation will protect our investments in rebuilding America’s infrastructure, developing our clean energy sector, and re-shoring America’s technology and manufacturing base. Together with our allies and partners, the United States will make our digital ecosystem:

- Defensible, where cyber defense is overwhelmingly easier, cheaper, and more effective;
- Resilient, where cyber incidents and errors have little widespread or lasting impact; and,
- Values-aligned, where our most cherished values shape – and are in turn reinforced by – our digital world.

The Administration has already taken steps to secure cyberspace and our digital ecosystem, including the National Security Strategy, Executive Order 14028 (Improving the Nation’s Cybersecurity), National Security Memorandum 5 (Improving Cybersecurity for Critical Infrastructure Control Systems), M-22-09 (Moving the U.S. Government Toward Zero-Trust Cybersecurity Principles), and National Security Memorandum 10 (Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems). Expanding on these efforts, the Strategy recognizes that cyberspace does not exist for its own end but as a tool to pursue our highest aspirations.

APPROACH

This Strategy seeks to build and enhance collaboration around five pillars:

1. Defend Critical Infrastructure – We will give the American people confidence in the availability and resilience of our critical infrastructure and the essential services it provides, including by:

- Expanding the use of minimum cybersecurity requirements in critical sectors to ensure national security and public safety and harmonizing regulations to reduce the burden of compliance;
- Enabling public-private collaboration at the speed and scale necessary to defend critical infrastructure and essential services; and,
- Defending and modernizing Federal networks and updating Federal incident response policy.

2. Disrupt and Dismantle Threat Actors – Using all instruments of national power, we will make malicious cyber actors incapable of threatening the national security or public safety of the United States, including by:

- Strategically employing all tools of national power to disrupt adversaries;
- Engaging the private sector in disruption activities through scalable mechanisms; and,
- Addressing the ransomware threat through a comprehensive Federal approach and in lockstep with our international partners.

3. Shape Market Forces to Drive Security and Resilience – We will place responsibility on those within our digital ecosystem that are best positioned to reduce risk and shift the consequences of poor cybersecurity away from the most vulnerable in order to make our digital ecosystem more trustworthy, including by:

- Promoting privacy and the security of personal data;
- Shifting liability for software products and services to promote secure development practices; and,
- Ensuring that Federal grant programs promote investments in new infrastructure that are secure and resilient.

4. Invest in a Resilient Future – Through strategic investments and coordinated, collaborative action, the United States will continue to lead the world in the innovation of secure and resilient next-generation technologies and infrastructure, including by:

- Reducing systemic technical vulnerabilities in the foundation of the Internet and across the digital ecosystem while making it more resilient against transnational digital repression;
- Prioritizing cybersecurity R&D for next-generation technologies such as post-quantum encryption, digital identity solutions, and clean energy infrastructure; and,
- Developing a diverse and robust national cyber workforce.

5. Forge International Partnerships to Pursue Shared Goals – The United States seeks a world where responsible state behavior in cyberspace is expected and reinforced and where irresponsible behavior is isolating and costly, including by:

- Leveraging international coalitions and partnerships among like-minded nations to counter threats to our digital ecosystem through joint preparedness, response, and cost imposition;
- Increasing the capacity of our partners to defend themselves against cyber threats, both in peacetime and in crisis; and,
- Working with our allies and partners to make secure, reliable, and trustworthy global supply chains for information and communications technology and operational technology products and services.

Coordinated by the Office of the National Cyber Director, the Administration’s implementation of this Strategy is already underway.

###
March 07, 2023
Suspected ransomware crew arrested in multi-country swoop • The Register
German and Ukrainian cops have arrested suspected members of the DoppelPaymer ransomware crew and issued warrants for three other "masterminds" behind the global operation that extorted tens of millions of dollars and may have led to the death of a hospital patient.

The criminal gang, also known as Indrik Spider, Double Spider and Grief, used double-extortion tactics. Before they encrypt the victims' systems, the crooks steal sensitive data and then threaten to publish the information on their leak site if the organization doesn't pay up.

German authorities are aware of 37 companies that fell victim to these criminals, including the University Hospital in Düsseldorf. That 2020 ransomware attack against the hospital led to a patient's death after the malware shut down the emergency department, forcing the staff to divert the woman's ambulance to a different medical center.

US law enforcement has also linked DoppelPaymer to Russia's Evil Corp, which the Treasury Department sanctioned in 2019. The FBI also assisted in the raids and arrests, and Europol noted that American victims of DoppelPaymer paid at least €40 million ($43 million) to the crooks between May 2019 and March 2021.

In simultaneous actions on February 28, German police arrested a local suspect the cops say "played a major role" in the ransomware gang and seized equipment from the suspect's home. Meanwhile, Ukrainian police arrested a local man who is also believed to be a core member of DoppelPaymer. During searches in Kiev and Kharkiv, the Ukrainian cops also seized electronic equipment now under forensic examination.

Small fry arrested, but big fish swim away

Additionally, the cops issued arrest warrants for three "suspected masterminds" behind the Russian-connected ransomware gang. The trio has also been added to Europe's most wanted list:

- Igor Olegovich Turashev allegedly acted as the administrator of the gang's IT infrastructure and malware, according to German police. Turashev is also wanted by the FBI for his alleged role in Evil Corp.
- Irina Zemlianikina "is also jointly responsible for several cyber attacks on German companies," the cops said. She allegedly administered the gang's chat and leak sites and sent malware-laden emails to infect victims' systems.
- The third suspect, Igor Garshin (alternatively: Garschin), is accused of spying on victim companies as well as encrypting and stealing their data.

DoppelPaymer has been around since 2019, when criminals first started using the ransomware to attack critical infrastructure, healthcare facilities, school districts and governments. It's based on BitPaymer ransomware and is part of the Dridex malware family, but with some interesting adaptations. According to Europol, DoppelPaymer ransomware used a unique evasion tool to shut down security-related processes of the attacked systems, and these attacks also relied on the prolific Emotet botnet. Criminals distributed their malware through various channels, including phishing and spam emails with attached documents containing malicious code – either JavaScript or VBScript.

Last fall, after rebranding as Grief, the gang infected the National Rifle Association and was linked to the attack on Sinclair Broadcast Group, a telecommunications conglomerate that owns a huge swath of TV stations in the US. ®
February 23, 2023
How Do Data Engineers Tame Big Data? - Dataconomy
Data engineers play a crucial role in managing and processing big data. They are responsible for designing, building, and maintaining the infrastructure and tools needed to manage and process large volumes of data effectively. This involves working closely with data analysts and data scientists to ensure that data is stored, processed, and analyzed efficiently to derive insights that inform decision-making.

What is data engineering?

Data engineering is a field of study that involves designing, building, and maintaining systems for the collection, storage, processing, and analysis of large volumes of data. In simpler terms, it involves the creation of data infrastructure and architecture that enable organizations to make data-driven decisions.

Data engineering has become increasingly important in recent years due to the explosion of data generated by businesses, governments, and individuals. With the rise of big data, data engineering has become critical for organizations looking to make sense of the vast amounts of information at their disposal. In the following sections, we will delve into the importance of data engineering, define what a data engineer is, and discuss the need for data engineers in today’s data-driven world.

Job description of data engineers

Data engineers play a critical role in the creation and maintenance of data infrastructure and architecture. They are responsible for designing, developing, and maintaining data systems that enable organizations to efficiently collect, store, process, and analyze large volumes of data. Let’s take a closer look at the job description of data engineers.

Designing, developing, and maintaining data systems

Data engineers are responsible for designing and building data systems that meet the needs of their organization.
This involves working closely with stakeholders to understand their requirements and developing solutions that can scale as the organization’s data needs grow.

Collecting, storing, and processing large datasets

Data engineers are also responsible for collecting, storing, and processing large volumes of data. This involves working with various data storage technologies, such as databases and data warehouses, and ensuring that the data is easily accessible and can be analyzed efficiently.

Implementing data security measures

Data security is a critical aspect of data engineering. Data engineers are responsible for implementing security measures that protect sensitive data from unauthorized access, theft, or loss. They must also ensure that data privacy regulations, such as GDPR and CCPA, are followed.

Ensuring data quality and integrity

Data quality and integrity are essential for accurate data analysis. Data engineers are responsible for ensuring that the data collected is accurate, consistent, and reliable. This involves creating data validation rules, monitoring data quality, and implementing processes to correct any errors that are identified.

Creating data pipelines and workflows

Data engineers create data pipelines and workflows that enable data to be collected, processed, and analyzed efficiently. This involves working with various tools and technologies, such as ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, to move data from its source to its destination. By creating efficient data pipelines and workflows, data engineers enable organizations to make data-driven decisions quickly and accurately.

Challenges faced by data engineers in managing and processing big data

As data continues to grow at an exponential rate, it has become increasingly challenging for organizations to manage and process big data.
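To make the ETL pattern described above concrete, here is a minimal sketch in pure Python. The CSV payload, field names, and validation rule are all made up for illustration; a production pipeline would use a proper orchestration tool, but the extract/transform/load shape is the same.

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV rows from a (hypothetical) source system."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize fields and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # simple validation rule: skip rows missing an amount
        out.append({"customer": row["customer"].strip().title(),
                    "amount": float(row["amount"])})
    return out

def load(rows, conn):
    """Load: write the cleaned records into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:customer, :amount)", rows)
    conn.commit()

raw = "customer,amount\n alice ,20\nbob,\ncarol,5\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # → (2, 25.0) — bob's row is dropped by the validation rule
```

The same three-stage decomposition scales up: each stage can be swapped out (a message queue for extract, Spark for transform, a warehouse for load) without changing the overall shape.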
This is where data engineers come in, as they play a critical role in the development, deployment, and maintenance of data infrastructure. However, data engineering is not without its challenges. In this section, we will discuss the top challenges faced by data engineers in managing and processing big data.

Data engineers are responsible for designing and building the systems that make it possible to store, process, and analyze large amounts of data. These systems include data pipelines, data warehouses, and data lakes, among others. However, building and maintaining these systems is not an easy task. Here are some of the challenges that data engineers face in managing and processing big data:

- Data volume: With the explosion of data in recent years, data engineers are tasked with managing massive volumes of data. This requires robust systems that can scale horizontally and vertically to accommodate the growing data volume.
- Data variety: Big data is often diverse in nature and comes in various formats such as structured, semi-structured, and unstructured data. Data engineers must ensure that the systems they build can handle all types of data and make it available for analysis.
- Data velocity: The speed at which data is generated, processed, and analyzed is another challenge that data engineers face. They must ensure that their systems can ingest and process data in real time or near-real time to keep up with the pace of business.
- Data quality: Data quality is crucial to ensure the accuracy and reliability of insights generated from big data. Data engineers must ensure that the data they process is of high quality and conforms to the standards set by the organization.
- Data security: Data breaches and cyberattacks are a significant concern for organizations that deal with big data.
Data engineers must ensure that the data they manage is secure and protected from unauthorized access.

Volume: Dealing with large amounts of data

One of the most significant challenges that data engineers face in managing and processing big data is dealing with large volumes of data. With the growing amount of data being generated, organizations are struggling to keep up with the storage and processing requirements. Here are some ways in which data engineers can tackle this challenge.

Impact on infrastructure and resources

Large volumes of data put a strain on the infrastructure and resources of an organization. Storing and processing such vast amounts of data requires significant investments in hardware, software, and other resources. It also requires a robust and scalable infrastructure that can handle the growing data volume.

Solutions for managing and processing large volumes of data

Data engineers can use various solutions to manage and process large volumes of data, including:

- Distributed computing: Distributed computing systems, such as Hadoop and Spark, can help distribute the processing of data across multiple nodes in a cluster. This approach allows for faster and more efficient processing of large volumes of data.
- Cloud computing: Cloud computing provides a scalable and cost-effective solution for managing and processing large volumes of data. Cloud providers offer various services, such as storage, compute, and analytics, which can be used to build and operate big data systems.
- Data compression and archiving: Data engineers can use data compression and archiving techniques to reduce the amount of storage space required for large volumes of data. This approach helps in reducing the costs associated with storage and allows for faster processing of data.

Velocity: Managing high-speed data streams

Another challenge that data engineers face in managing and processing big data is managing high-speed data streams.
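The compression-and-archiving approach described above can be illustrated with Python's standard-library gzip module. The payload here is a hypothetical repetitive event log, the kind of bulk data that typically heads to cold storage; real pipelines would compress per file or per partition, but the mechanics are the same.

```python
import gzip

# A repetitive event log, standing in for bulk data headed to archival storage.
payload = b'{"event": "page_view", "user": 42}\n' * 10_000

compressed = gzip.compress(payload)
print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")

# Compression is lossless, so nothing is dropped on the way to the archive.
restored = gzip.decompress(compressed)
assert restored == payload
```

Highly repetitive data like this shrinks dramatically, which is exactly why archiving logs and event streams compressed is a common cost lever.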
With the increasing amount of data being generated in real time, organizations need to process and analyze data as soon as it is available. Here are some ways in which data engineers can manage high-speed data streams.

Impact on infrastructure and resources

High-speed data streams require a robust and scalable infrastructure that can handle the incoming data. This infrastructure must be capable of processing data in real time or near-real time, which can put a strain on the resources of an organization.

Solutions for managing and processing high-velocity data

Data engineers can use various solutions to manage and process high-speed data streams, including:

- Stream processing: Stream processing systems, such as Apache Kafka and Apache Flink, can help process high-speed data streams in real time. These systems allow for the processing of data as soon as it is generated, enabling organizations to respond quickly to changing business requirements.
- In-memory computing: In-memory computing systems, such as Apache Ignite and SAP HANA, can help process high-speed data streams by storing data in memory instead of on disk. This approach allows for faster access to data, enabling real-time processing of high-velocity data.
- Edge computing: Edge computing allows for the processing of data at the edge of the network, closer to the source of the data. This approach reduces the latency associated with transmitting data to a central location for processing, enabling faster processing of high-speed data streams.

Variety: Processing different types of data

One of the significant challenges that data engineers face in managing and processing big data is dealing with different types of data.
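The windowed aggregation at the heart of the stream-processing systems mentioned above can be sketched in a few lines of pure Python. This is a toy stand-in, not the Kafka or Flink API: events arrive one at a time from an unbounded iterator, and the processor emits one aggregate per fixed-size (tumbling) window, which is the basic pattern those systems industrialize.

```python
def tumbling_windows(stream, size):
    """Group an unbounded event stream into fixed-size windows and emit
    one aggregate per window — the core stream-processing pattern."""
    window = []
    for event in stream:
        window.append(event)
        if len(window) == size:
            yield sum(window) / size  # per-window aggregate, e.g. mean reading
            window = []

# Simulated sensor readings arriving one at a time.
readings = iter([10, 20, 30, 40, 50, 60])
averages = list(tumbling_windows(readings, 3))
print(averages)  # → [20.0, 50.0]
```

Because the generator holds only the current window, memory use stays constant no matter how long the stream runs, which is what makes this shape viable for high-velocity data.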
In today’s world, data comes in various formats and structures, such as structured, unstructured, and semi-structured. Here are some ways in which data engineers can tackle this challenge.

Impact on infrastructure and resources

Processing different types of data requires a robust infrastructure and resources capable of handling the varied data formats and structures. It also requires specialized tools and technologies for processing and analyzing the data, which can put a strain on the resources of an organization.

Solutions for managing and processing different types of data

Data engineers can use various solutions to manage and process different types of data, including:

- Data integration: Data integration is the process of combining data from various sources into a single, unified view. It helps in managing and processing different types of data by providing a standardized view of the data, making it easier to analyze and process.
- Data warehousing: Data warehousing involves storing and managing data from various sources in a central repository. It provides a structured and organized view of the data, making it easier to manage and process different types of data.
- Data virtualization: Data virtualization allows for the integration of data from various sources without physically moving the data. It provides a unified view of the data, making it easier to manage and process different types of data.

Veracity: Ensuring data accuracy and consistency

Another significant challenge that data engineers face in managing and processing big data is ensuring data accuracy and consistency. With the increasing amount of data being generated, it is essential to ensure that the data is accurate and consistent to make informed decisions.
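The data integration idea described above — combining records from differently shaped sources into a single unified view — can be sketched in pure Python. The two source systems, their field names, and the shared `email` key are all invented for illustration; real integration layers add schema mapping and conflict resolution on top of this basic join-by-key shape.

```python
# Records for the same customers arrive from two (hypothetical) systems
# with different shapes: a CRM and a billing service.
crm = [{"email": "a@x.com", "name": "Alice"},
       {"email": "b@x.com", "name": "Bob"}]
billing = [{"email": "a@x.com", "plan": "pro"},
           {"email": "c@x.com", "plan": "free"}]

def integrate(*sources):
    """Build a single unified view of all records, keyed on a shared identifier."""
    unified = {}
    for source in sources:
        for record in source:
            # Merge each record into the entry for its key, creating it if new.
            unified.setdefault(record["email"], {}).update(record)
    return unified

view = integrate(crm, billing)
print(view["a@x.com"])  # → {'email': 'a@x.com', 'name': 'Alice', 'plan': 'pro'}
```

Customers known to only one system still appear in the view with the fields that system provides, which is the behavior most integration layers default to before any cleansing step.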
Here are some ways in which data engineers can ensure data accuracy and consistency.

Impact on infrastructure and resources

Ensuring data accuracy and consistency requires a robust infrastructure and resources capable of handling the data quality checks and validations. It also requires specialized tools and technologies for detecting and correcting errors in the data, which can put a strain on the resources of an organization.

Solutions for managing and processing accurate and consistent data

Data engineers can use various solutions to manage and process accurate and consistent data, including:

- Data quality management: Data quality management involves ensuring that the data is accurate, consistent, and complete. It includes various processes such as data profiling, data cleansing, and data validation.
- Master data management: Master data management involves creating a single, unified view of master data, such as customer data, product data, and supplier data. It helps in ensuring data accuracy and consistency by providing a standardized view of the data.
- Data governance: Data governance involves establishing policies, procedures, and controls for managing and processing data. It helps in ensuring data accuracy and consistency by providing a framework for managing the data lifecycle and ensuring compliance with regulations and standards.

Security: Protecting sensitive data

One of the most critical challenges faced by data engineers in managing and processing big data is ensuring the security of sensitive data. As the amount of data being generated continues to increase, it is essential to protect the data from security breaches that can compromise the data’s integrity and the organization’s reputation.
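The data validation rules that underpin data quality management can be expressed declaratively. This is a minimal sketch with invented fields and thresholds: each rule maps a field to a predicate it must satisfy, and a record's violations are simply the rules it fails.

```python
# Declarative validation rules: each field maps to a predicate it must pass.
# The fields and bounds here are made up for illustration.
RULES = {
    "age":   lambda v: isinstance(v, int) and 0 <= v <= 130,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate(record):
    """Return the list of fields that violate a rule (empty means valid)."""
    return [field for field, ok in RULES.items()
            if field in record and not ok(record[field])]

good = {"age": 34, "email": "dana@example.com"}
bad  = {"age": -5, "email": "not-an-address"}
print(validate(good))  # → []
print(validate(bad))   # → ['age', 'email']
```

Keeping rules as data rather than scattered if-statements makes it easy to profile a whole dataset (count violations per field) and to evolve the rule set under a governance process.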
Here are some ways in which data engineers can tackle this challenge.

Impact of security breaches on data integrity and reputation

Security breaches can have a significant impact on an organization’s data integrity and reputation. They can lead to the loss of sensitive data, damage the organization’s reputation, and result in legal and financial consequences.

Solutions for managing and processing data securely

Data engineers can use various solutions to manage and process data securely, including:

- Encryption: Encryption involves converting data into a code that is difficult to read without the proper decryption key. It helps in protecting sensitive data from unauthorized access and is an essential tool for managing and processing data securely.
- Access controls: Access controls involve restricting access to sensitive data based on user roles and permissions. They help in ensuring that only authorized personnel have access to sensitive data.
- Auditing and monitoring: Auditing and monitoring involve tracking and recording access to sensitive data. They help in detecting and preventing security breaches by providing a record of who accessed the data and when.

In addition to these solutions, data engineers can also follow best practices for data security, such as regular security assessments, vulnerability scanning, and threat modeling.

Best practices for overcoming challenges in big data management and processing

To effectively manage and process big data, data engineers need to adopt certain best practices. These can help overcome the challenges discussed in the previous section and ensure that data processing and management are efficient and effective. Data engineers play a critical role in managing and processing big data. They are responsible for ensuring that data is available, secure, and accessible to the right people at the right time.
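The role-based access controls described above reduce to a small deny-by-default check. The roles and actions below are hypothetical; production systems layer this same idea with authentication, audit logging, and finer-grained permissions.

```python
# A minimal role-based access control (RBAC) table: roles map to the set
# of actions they are permitted to perform. Roles/actions are made up.
ROLE_PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "delete"},
}

def is_allowed(role, action):
    """Deny by default: unknown roles or unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))    # → True
print(is_allowed("analyst", "delete"))  # → False
print(is_allowed("intern", "read"))     # → False (unknown role is denied)
```

The deny-by-default stance — an unknown role or action yields no access rather than an error or a pass — is the property auditors look for first in an access-control layer.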
To perform this role successfully, data engineers need to follow best practices that enable them to manage and process data efficiently.

Adopting a data-centric approach to big data management

Adopting a data-centric approach is a best practice that data engineers should follow to manage and process big data successfully. This approach involves putting data at the center of all processes and decisions, focusing on the data’s quality, security, and accessibility. Data engineers should also ensure that data is collected, stored, and managed in a way that makes it easy to analyze and derive insights.

Investing in scalable infrastructure and cloud-based solutions

Another best practice for managing and processing big data is investing in scalable infrastructure and cloud-based solutions. Scalable infrastructure allows data engineers to handle large amounts of data without compromising performance or data integrity. Cloud-based solutions offer the added benefit of providing flexibility and scalability, allowing data engineers to scale their infrastructure up or down as needed.

In addition to these best practices, data engineers should also prioritize the following:

- Data governance: Establishing data governance policies and procedures that ensure the data’s quality, security, and accessibility.
- Automation: Automating repetitive tasks and processes to free up time for more complex tasks.
- Collaboration: Encouraging collaboration between data engineers, data analysts, and data scientists to ensure that data is used effectively.

Leveraging automation and machine learning for data processing

Another best practice for managing and processing big data is leveraging automation and machine learning. Automation can help data engineers streamline repetitive tasks and processes, allowing them to focus on more complex tasks that require their expertise.
Machine learning, on the other hand, can help data engineers analyze large volumes of data and derive insights that might not be immediately apparent through traditional analysis methods.

Implementing strong data governance and security measures
Implementing strong data governance and security measures is crucial to managing and processing big data. Data governance policies and procedures can ensure that data is accurate, consistent, and accessible to the right people at the right time. Security measures, such as encryption and access controls, can prevent unauthorized access or data breaches that could compromise data integrity or confidentiality.

Establishing a culture of continuous improvement and learning
Finally, data engineers should establish a culture of continuous improvement and learning. This involves regularly reviewing and refining data management and processing practices to ensure that they remain effective and efficient. Data engineers should also stay up to date with the latest tools, technologies, and industry trends.

In addition to these best practices, data engineers should also prioritize the following:

Collaboration: Encouraging collaboration between data engineers, data analysts, and data scientists to ensure that data is used effectively.

Scalability: Investing in scalable infrastructure and cloud-based solutions to handle large volumes of data.

Flexibility: Being adaptable to changing business needs and data requirements.

Conclusion
Managing and processing big data can be a daunting task for data engineers. The challenges of dealing with large volumes, high velocity, varied types, accuracy, and security of data can make it difficult to derive insights that inform decision-making and drive business success.
However, by adopting best practices, data engineers can successfully overcome these challenges and ensure that data is effectively managed and processed.

In conclusion, data engineers face several challenges when managing and processing big data. These challenges can affect data integrity, accessibility, and security, which can ultimately hinder data-driven decision-making. It is crucial for data engineers and organizations to prioritize best practices such as adopting a data-centric approach, investing in scalable infrastructure and cloud-based solutions, leveraging automation and machine learning, implementing strong data governance and security measures, establishing a culture of continuous improvement and learning, and prioritizing collaboration, scalability, and flexibility.

By addressing these challenges and prioritizing best practices, data engineers can effectively manage and process big data, providing organizations with the insights they need to make informed decisions and drive business success. If you want to learn more about data engineers, check out the article "Data is the new gold and the industry demands goldsmiths."
March 02, 2023
Closing the Cybersecurity Talent Gap
Despite recent layoffs announced by Amazon, Google, Microsoft, and others, some tech professionals remain in short supply, particularly skilled and creative cybersecurity experts. To find the professionals needed to protect their systems against cyberattacks, IT leaders are increasingly turning to creative approaches.

Cybersecurity talent remains in high demand for 2023 and is predicted to stay in demand for the foreseeable future, says Doug Glair, cybersecurity director with technology research and advisory firm ISG. “To address this challenge, companies must leverage traditional HR recruiting, hiring, and retention strategies, along with some non-traditional strategies, to address the ongoing demand.”

Always network with relevant contacts in your field, advises John Burnet, vice president of global talent at AI-based SaaS platform provider Armis. “Whether the need is right now or around the corner, proactivity is the name of the game when looking for great talent.”

To succeed in today's competitive cybersecurity job market, organizations must look for talent in adjacent fields, both externally and within their own organization, says Jon Check, executive director of cyber protection solutions at Raytheon Intelligence & Space. “Employees who are looking to change career paths, or simply try a different role within the cybersecurity industry, can be ideal candidates for additional security training,” he explains.

Qualifications and Certifications
As always, the most sought-after cybersecurity professionals are those with the strongest credentials. “Certifications such as CISSP and CISM demonstrate that individuals have technical capability and are putting effort into their careers,” says Richard Watson-Bruhn, privacy and cybersecurity expert at professional services firm PA Consulting.

It pays to be flexible when facing a scarce candidate market.
“Over the past few years, we've learned that a cyber degree or typical cyber background isn’t necessarily a requirement to be a successful security professional,” Check says. “What matters … are the characteristics or ‘soft skills’ that an employee exhibits.” An intelligent, promising candidate can acquire specific skills by working alongside experienced colleagues.

Meanwhile, many enterprises will only hire people with proven cyber experience. “This dramatically shrinks the candidate ocean into a candidate pool,” Burnet observes. He notes that it's better to focus on values, traits, and behaviors than on a degree or dated qualification. Burnet also advises leaders to reevaluate their organizations' onboarding programs “to give promising new hires the best experience and accelerated learning journey.”

Fresh Approaches to Candidate Searches
Cybersecurity is often viewed as just another technical talent field, yet candidates are expected to possess a wide range of rapidly evolving knowledge and skills. When filling staffing gaps, leaders should examine the skill sets missing from their current team, such as creative problem solving, stakeholder communications, buy-in development, and change enablement. “Look for candidates who will help balance out existing team skills as opposed to individuals who match a specific technical qualification,” Glair says.

Before hiring can begin, it's necessary to attract suitable candidates. Initial search steps should include website updates and social media posts, Glair says. He also suggests creating an internal “cybersecurity academy” to build talent from within the organization. “This should include the technical, process, communications, and leadership skills needed to address today’s cybersecurity challenges,” Glair notes.

Burnet recommends sponsoring a “sourcing jam.” “That means getting recruiters and/or hiring managers in a room together ...
to trawl through their networks and get them to personally reach out.”

It's easy to forget that cybersecurity is still a relatively new field. “There are many people who couldn’t, or didn't, discover cybersecurity as a first career, but have all the right talents to excel in the field,” Watson-Bruhn says. “Retraining programs can find people who perhaps have a first career in marketing or teaching, who can become skilled members of the team and bring wider knowledge and different views from their first career.”

Possible Pitfalls
Flexibility is essential when searching for cybersecurity candidates. Requiring individuals to meet every criterion can result in finding nobody, or only people who think alike and share the background of whoever set the criteria, Watson-Bruhn warns. Meanwhile, flexibility can sometimes lead to pleasant surprises. “Often, the best talent ends up missing something you expected in one area, but brings something completely new,” he says.

Another common mistake is restricting talent searches to individuals with traditional academic backgrounds. “While there are many distinguished university programs that are specifically focused on preparing students to enter the cyber workforce, often … these programs can’t fully train the students on the hard skills they will need for their future cyber careers,” Check says. This apparent drawback actually provides the opportunity to hire candidates with other types of academic degrees, which can be complemented by on-the-job cyber training. “By overlooking this group, organizations are limiting the potential these new nontraditional hires could bring to their companies,” he notes.

Approaches for attracting, hiring, and retaining cybersecurity talent should be embedded into every enterprise’s cybersecurity strategy. “This means investing in cultivating, maintaining, and evolving the culture of the organization so people -- the most important asset -- are top priority,” Glair says.
“This includes focusing on recognition, rewards, flexible work practices, clear progression paths, open communications and feedback, performance-based incentives, and learning and development programs.”
February 28, 2023
4 Trends To Expect From The Cyber Landscape In 2023 - Forbes
Wendi Whitmore is the Senior Vice President of Unit 42 at Palo Alto Networks.

The new year is upon us, and with it comes great potential for threat actors and cybersecurity professionals alike. In 2022, we saw the threat landscape become more complex than ever, with groups like Vice Society, Trident Ursa (aka Gamaredon) and Ransom Cartel leveraging a wide variety of tactics to exploit their victims and wreak havoc. Despite this, cybersecurity professionals are rising to the challenge, pushing back on these threats using both tried-and-true methods and new innovations. As we look ahead, here are my predictions for how the cybersecurity landscape will evolve in 2023.

1. More people will get involved in cybercrime to make ends meet.
Economic conditions continue to fluctuate, with inflation causing financial stress for many. Unfortunately, more than 100,000 tech workers were laid off from over 300 companies in 2023 alone. Consequently, financial pressure may push more people to turn to cybercrime to make ends meet and stay afloat.

Advancements in technology have made the barrier to entry for becoming a threat actor relatively low, with hacking-as-a-service platforms easy to access and leverage. The widespread availability of other attack frameworks also enables unskilled threat actors to carry out low-level attacks easily. With all of these factors combined, 2023 may bring a new wave of people turning to cybercrime.

2. Off-the-shelf tools will lower barriers to entry into cybercrime.
For threat actors, it is simple to obtain and deploy tools, given the wide variety of options readily available to purchase or free to download.
A rising example is Brute Ratel C4, a red-teaming and adversarial attack simulation tool that hit the market in 2022 and is quickly becoming the next Cobalt Strike for attackers. For a licensing fee, threat actors can purchase and deploy this tool, which was designed to avoid detection by endpoint detection and response (EDR) and antivirus (AV) products. Nation-state actors have leveraged the ease of access to off-the-shelf tools, taking advantage of the quick deployment and stealthy cover that a widely available tool provides. For example, last year we observed nation-states using Brute Ratel C4 to carry out their attacks. We expect to see increased usage of commercially and freely available tools, especially from nation-state threat actors.

3. Ransomware will run rampant for the full spectrum of threat actors.
Ransomware is on the rise, accounting for 34% of all cyber insurance claims during the first half of 2022. With the ability to deliver a significant payoff (ransom demands soar as high as $30 million), ransomware is becoming an especially attractive tactic amid today's economic uncertainty and financial insecurity. Additionally, the adoption of ransomware-as-a-service (RaaS) equips less sophisticated threat actors with more sophisticated tools.

While entry-level threat actors can easily exploit RaaS for quick attacks, big players leverage ransomware for larger schemes. In 2022 alone, we saw a great deal of activity from groups like Cuba Ransomware, BlueSky, Black Basta and more, with devastating effects. As we plan for the new year, we expect ransomware to continue to be a popular avenue for threat actors of all experience levels.

4. The window of time to patch high-profile vulnerabilities before exploitation will continue to shrink.
Amid a rise in cybercrime and ransomware attacks, organizations will need to position themselves to identify potential attack vectors faster and address vulnerabilities within their security infrastructures. More than 22,000 vulnerabilities were discovered in 2022, just over 60 per day. Many of these can be found in popular software platforms, such as the FabricScape vulnerability we identified last year. As threat actors become more sophisticated, they quickly exploit these new vulnerabilities. This makes it imperative for organizations to identify and patch vulnerabilities with urgency, reducing the risk of suffering an attack.

Mitigating The Risks
While we expect activity in the threat landscape to increase this year, there are ways to mitigate these risks. Organizations should consider implementing a true zero-trust framework to address the needs of today's ever-evolving threat landscape. By removing implicit trust and setting up comprehensive security controls, organizations can better protect their valuable and vulnerable data, systems and employees.

With the shift to remote work, employees are working from almost anywhere: from home to hotels, coffee shops and more. While many appreciate the flexibility, it has made it more difficult to verify that all connections come from trusted employees. The rise in remote workers, combined with the fact that many cloud services used by employees face the internet directly, puts an organization at risk. This makes a zero-trust framework even more important to consider.

Additionally, ensure multifactor authentication (MFA) is implemented as widely as possible. Organizations should layer in MFA forms such as biometric checks, hardware keys or certificates to help keep themselves protected.
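As one concrete MFA building block, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) generator in Python; real deployments should rely on a vetted library rather than hand-rolled code like this:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).

    The current Unix time is divided into 30-second steps; the step
    counter is HMAC'd with the shared secret and dynamically truncated
    to a short numeric code.
    """
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret; at T=59s the 8-digit code is 94287082.
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

Because both server and authenticator app derive the same short-lived code from a shared secret, a stolen password alone is no longer enough to log in.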
By leveraging these strategies and technologies, organizations can ensure they have the proper security to meet the challenges of modern threats, applications and more.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
February 28, 2023
Most ransomware payments go on to fund many further attacks - TechRadar
When a threat actor manages to extort money out of a ransomware victim, they rarely use the cash to take a holiday. Instead, they use the newly acquired funds to finance more cybercriminal activities, new research has found. A report from Trend Micro claims that while just 10% of ransomware victims end up paying the ransom, the money paid often gets used in future attacks. The report also found that the victims who agree to pay the ransom usually do it quickly, and are often forced to pay more per incident.

Funding more attacks
What's more, although the risk is not homogeneous and differs by sector, company size, and country, there is a degree of similarity between victims: those in some countries and some verticals usually pay a higher demand than others, which makes them a more popular target among attackers.

Usually, businesses are advised against paying the ransom. Payment does not guarantee they'll get their data back, even partially. At the same time, it motivates the attackers to continue their ransomware operations. And finally, there is no guarantee that the same organization will not be targeted again, whether by the same threat actor or someone completely different. Trend Micro also added that paying the ransom "often only results in driving up the overall cost of the incident with few other benefits".

Instead, companies should strengthen their infrastructure and prepare for potential attacks. The best times of year to do so are January and July-August, as those are the periods when ransomware monetization activities are at their lowest, the researchers said. "By prioritizing protection left of the kill chain, continuing in-depth analysis of the ransomware ecosystems, and focusing global efforts on reducing the percentage of victims paying," businesses could make ransomware attacks less profitable for the attackers.
February 26, 2023
Green and Affordable: Reducing Electricity Costs by Archiving Data on Tape - dotmagazine
Man-made climate change can no longer be denied. Worse still, it is progressing faster than originally thought. According to the IPCC, global greenhouse gas emissions must fall immediately if the goals of the Paris Climate Agreement are to be achieved. By 2050, net global CO2 emissions must have been reduced to zero.1 This means that all greenhouse gas emissions caused by humans must be removed from the atmosphere through reduction measures so that the Earth's climate balance returns to zero. This would make humanity climate neutral and stabilize the global temperature.

CO2 emissions are particularly high in the IT and telecommunications industry, twice as high as in the aviation industry, as calculated by the Boston Consulting Group.2 In more detail: data centers as well as complex raw material and production chains are currently responsible for three to four percent of global CO2 emissions, while air traffic accounts for two percent. While the aviation industry is working to reduce CO2 emissions through more modern aircraft, the IT sector's share will rise sharply due to growing data traffic and could reach 14 percent of global CO2 emissions by 2040.3

But there is a way to significantly reduce CO2 emissions in the IT sector while also significantly lowering costs, because ecological does not have to mean expensive. The IT sector, or more precisely the field of data archiving, should consider the opportunities to save electricity and CO2 emissions through tape technology.

How to lower CO2 emissions in the IT sector
Wherever large amounts of data need to be stored long-term, tape media offer a way to drastically reduce the electricity costs of saving data. Archiving is synonymous with an accumulation of stored data because, in contrast to short-term backup, no data is deleted or overwritten.
According to legal rules and regulations, archival data must be stored in such a way that it can be accessed at any time, whether after three months or 30 years. Together, these factors mean that enormously large amounts of data accumulate quickly, and preparing storage for them is costly. Many companies would like to do business more ecologically but shy away from the costs involved. Hard disks are well suited to short-term backup because of their faster access times, but by using tape storage for the long-term archive instead, businesses can significantly improve their environmental footprint and reduce costs at the same time.

The most recent studies by IDC make apparent how urgent a cost-effective solution for archiving data is and will become. While the independent market research and consulting company assumed in 2019 that the amount of stored data would grow to 7.5 ZB by 2025, doubling every two to three years, a 2021 IDC study already predicted a data volume of around 17 ZB to be stored in 2025.4 The growth is driven by IoT, analytics, 5G networks and the video sector, but the automotive sector, with its plans for autonomous driving, is also contributing to immense data growth. Much of this data is unstructured and, as it ages, is rarely accessed. Industry analysts estimate that 60-80 percent of stored data is rarely accessed.5 Moving this cold data to tape storage offers a huge opportunity to reduce electricity costs and CO2 emissions.

Unlike hard disks, tape technology consumes hardly any power except during the writing and reading process.
Hard disks, on the other hand, need to be both powered and cooled around the clock, because the stored data must be constantly validated and, if necessary, rewritten due to gradual demagnetization to prevent data loss. If the data is stored on tape, no electricity is required for its continued storage: the written tape can simply be kept in a safe or remain in the library, and the data stays safely stored. Since correspondingly little heat is generated, much less cooling is required; almost all of the energy consumed by a hard disk is converted into heat.

According to recent studies, around 27 percent of the energy required by data centers across Europe was spent on cooling and air conditioning in 2020.6 Electricity consumption for cooling alone thus causes about 3.5 million tons of CO2 emissions, roughly the amount produced by burning 1.33 billion liters of diesel. By 2025, the energy demand for cooling is expected to increase to 16.4 TWh,7 which in turn would correspond to 5.6 million tons of CO2. Absorbing this increase would require another medium-sized coal-fired power plant. For this reason, operators of energy-hungry server and HDD (hard disk drive) farms in hyperscale data centers try to transfer as much data as possible to tape in order to minimize energy consumption. Tape can thus become a pressure relief valve for unabated data center expansion.

How the evolution of tape technology leads to lower CO2 emissions
Over the past 10 years, tape technology has evolved tremendously. In 2014, the capacity of a tape of the then-current LTO6 generation was 2.5 TB. With the latest generation, LTO9, the capacity is already 18 TB per tape, 7.2 times as much. At the same time, transfer rates have improved significantly: LTO6 offers a native transfer rate of 160 MB/s, while LTO9 already reaches 400 MB/s.
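A quick back-of-the-envelope check of those figures: the time to stream a full cartridge at the native rate. This sketch assumes decimal units (1 TB = 10^6 MB) and ignores compression:

```python
def full_tape_hours(capacity_tb, rate_mb_s):
    """Hours needed to write a full cartridge at its native rate."""
    return capacity_tb * 1e6 / rate_mb_s / 3600  # 1 TB = 1e6 MB

print(round(full_tape_hours(2.5, 160), 1))  # LTO6: 4.3 hours
print(round(full_tape_hours(18, 400), 1))   # LTO9: 12.5 hours
```

Per terabyte, an LTO9 drive is busy for roughly 0.7 hours against LTO6's roughly 1.7, so even though a full cartridge takes longer to fill, each stored terabyte keeps the drive, and its power draw, active for far less time.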
The technology is continuously improving, so that in another ten years' time tape is expected to have a native capacity of more than 100 TB. Since power is only consumed during the write and read processes, faster backup times inevitably lead to less power consumption and, in turn, lower CO2 emissions.

As we can see, moving to tape has huge benefits for a company regarding both sustainability and financial aspects, and magnetic tape is the eco-friendly choice for data storage. Compared to other traditional technologies in data centers (such as HDDs), tape can drive a reduction in carbon emissions of as much as 95 percent.8 According to IDC projections, migrating global data storage to tape would reduce annual CO2 emissions by 43.7 percent by 2030, with the potential to avoid a cumulative 664 million metric tons of carbon emissions between 2019 and 2030.9

Figure 1: Ten-Year CO2e emissions. Source: Brad Johns Consulting, LLC, 2021: Improving Information Technology Sustainability with Modern Tape Storage

According to the IPCC's 2021 World Climate Report, a global budget of only 300 billion tons of CO2 remains if we still want to meet the 1.5-degree target.10 The migration of cold data to tape would thus make a major contribution to achieving this goal.

Since 2018, Melina Schnaudt has been coordinating EMEA-wide communication about developments in tape technology and data archiving from the office in Kleve. At the moment, she is developing content on new aspects of tape technology, such as the ecological footprint of tape and future challenges for data archiving.

Please note: The opinions expressed in Industry Insights published by dotmagazine are the author's or interview partner's own and do not necessarily reflect the view of the publisher, eco – Association of the Internet Industry.
February 27, 2023
Magnetic tape storage is seeing cloud go back to the future for its archival data needs
One February morning in 2011, 40,000 users of Google's Gmail service awoke to find that they were, in fact, no longer users of Google's Gmail service. Their emails had, in politer language than many of the search giant's customers probably used at the time, completely vanished, thanks to a misconfigured software upgrade. Not to worry, Google assured this confused multitude: it had a backup plan. Hidden away in a far-flung data centre rested hundreds of magnetic tape cartridges containing facsimiles of all the lost accounts. It took a little while, but eventually each and every account was restored, using essentially the same technology your parents used to make mix tapes, or record last week's episode of 'Coronation Street.'

It's a story all the more astonishing for the fact that, even now, magnetic tape drives serve exactly the same purpose for a growing number of companies, in spite of multiple predictions that the technology should have died a death long ago. "My first experience with tape was in the beginning of my career – that was in 1981," recalls Phil Goodwin, a research director at IDC and an expert in digital storage. Even then, says Goodwin, people were saying tape was not long for this world. Those critics appear to have been silenced by recent sales figures, which show year-on-year shipments of hard disk drives (HDDs) sinking by 34% in 2022, while shipments of magnetic tape rose by 14% – a total of 79.3 exabytes, roughly equivalent to the entirety of data created on the internet every 32 days.

This is in spite of the fact that HDDs still boast formidable storage capacities and retrieve data much faster than tape drives ever could. But the priorities of cloud providers have changed in recent years.
Front of mind for hyperscalers, explains Goodwin, is the cost of storage, and when approximately 60% of all data is the kind of information that doesn't need to be accessed with any urgency, how quickly you can reach the first byte of it suddenly matters a lot less. Tape also requires much less power to run than HDDs, chiming with the sustainability priorities of the likes of AWS and Azure (although tape libraries do still tend to be housed in climate-controlled data centres). And for those worried about cybersecurity, tape libraries are almost always air-gapped, and extremely difficult to tamper with. "The whole idea is that you can take data on magnetic tape, remove it from a library, put it into a vault or on a shelf or whatever, and it's effectively saved from any external threats," says Goodwin.

Magnetic tape storage drives, HDDs walk
Another reason for tape's renewed popularity is that the pace of innovation in HDD technology is slackening. Areal density in hard disks is now growing by only up to 8% a year, a far cry from the glory days of the medium, when doubling capacity meant simply adding an extra disk and two heads to each unit. Now, though, "there's no space left in the HDD form factor to squeeze in more disks," explains Mark Lantz, IBM's manager for advanced tape technologies.

Capacity is the least of magnetic tape's problems. "We're basically doubling capacity every generation," says Lantz. Specifically, that's down to the fact that data tends to be written using larger bits on tape than in HDDs.
This means that researchers can continue to innovate in the space by progressively reducing the size of the bits without compromising on the size of the tape, squeezing more out of an individual cartridge for longer. By using that strategy, says Lantz, magnetic tape has huge long-term potential as an archival storage medium: researchers can "continue scaling areal density and capacity, probably, for about 15 to 20 more years before we run into the same fundamental physical challenges that HDD currently faces." As such, argues Lantz, no fundamental physical limit has yet been discovered for the storage capacity of tape. "There's a huge potential to scale the capacity of these systems," says Lantz. "Today, enterprise cartridges [containing] 20 terabytes, if we recorded 317 gigabits per square inch? That's a potential cartridge capacity of 580 terabytes. So, half a petabyte in a single cartridge."

The simplicity of the technology is also a key attribute, argues Lantz. "It's a serial write technology," he says. "If you wanted to re-encrypt the tape, or to delete all of the data on tape, it takes a long time. And so, if somebody starts trying to interfere with the data in your tape library, basically overriding it all, it takes much longer to destroy the data on tape than on HDD."

Even so, explains Dr Ioan Stefanovici of Microsoft Research, the truly determined attacker will persist in their efforts, despite these difficulties. "In the absence of proper cybersecurity defences, tape libraries are still potentially liable to malware or ransomware attacks," says Stefanovici, "where the robot mechanism for tape delivery can be hijacked and made to insert specific tapes into drives for malicious access."
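Lantz's cartridge arithmetic can be sanity-checked in a few lines: dividing 580 TB by 317 gigabits per square inch gives the recorded area such a cartridge implies. Decimal units are assumed, and no tape dimensions are taken from the article:

```python
def implied_area_in2(capacity_tb, density_gbit_in2):
    """Recorded tape area (square inches) implied by a given cartridge
    capacity and areal density."""
    bits = capacity_tb * 1e12 * 8              # capacity in bits
    return bits / (density_gbit_in2 * 1e9)     # divide by bits per in^2

area = implied_area_in2(580, 317)
print(round(area))  # 14637 square inches of recorded surface
```

Roughly fifteen thousand square inches is what several hundred metres of half-inch-wide tape supplies, so the half-petabyte figure is geometrically plausible within a normal cartridge form factor (a hedged observation, not a figure from the article).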
Ceilings of innovation
Tape also lasts an incredibly long time, which is increasingly important in an age where more and more data is required by law to be squirrelled away for a rainy day, even if it doesn't need to be immediately used. Data written onto cartridges up to 40 years ago can still be read back, explains Lantz, albeit using ageing technology that's light-years behind what's currently available. One such case involved the retrieval of data from the first lunar landings, hitherto assumed lost but actually broadcast to an Australian radio telescope during the mission on 14-track tapes. The result was the emergence of video footage of the mission of much higher quality than had ever been seen before, even though the equipment used to extract it was borrowed from museums and operated by individuals who had previously been happily retired.

While that's a nice story, it also illustrates a long-term problem with magnetic tape: the uneven pace of innovation when it comes to building the machines capable of reading it. Such 'device orphaning,' says Stefanovici, combined with the inevitability of data decay, "can ultimately result in datasets sitting in long-term storage, potentially inaccessible, and at high risk of becoming lost." Current magnetic tape storage technology would probably last just as long, explains Lantz, provided it was stored properly, though he recommends that interested companies migrate their data every couple of years to newer cartridges to harness the growing storage capacity of the medium. The cost of these upgrades is something Goodwin recommends companies weigh carefully when considering investing in tape.
“It really is best practice to take the media that’s becoming obsolete and re-read and write it out to current generations of tape,” he says.

And while capacity is expected to increase, slower progress has been made in extracting data. While the pace of streaming data off tape has scaled continuously, explains Lantz, the ‘time to first byte’ is still in the tens of seconds, rather than the tens of milliseconds experienced with HDD. “For what we call really hot data that’s being accessed a lot, we would recommend putting it on flash, because the IOPS performance is so much better than anything else,” says Lantz. As that data cools, it should be moved to HDD and, eventually, to tape libraries.

Might breakthroughs in HDD technology push tape out of its place in the hierarchy of storage media? In a recent interview with TechWireAsia, AWS’ vice president for storage, edge and data services Wayne Duso expressed scepticism about tape’s long-term prospects. “The need for deep data storage has not disappeared, but the solution needs to be simpler, easier, more cost-effective, and more efficient than tapes,” he said, touting the capabilities of AWS’s latest S3 Glacier solution. “I do not believe tapes are dead, and if someone wants to use tapes for their solution, that is fine. But the solution that tapes initially provided is no longer sufficient.”

More experimental methods might also knock magnetic tape storage off its pedestal. DNA storage, for example, is predicted by some to reach a dollar per terabyte by the end of this decade, while Stefanovici pinpoints glass – specifically, silica – as an alternative that has practically zero power or environmental demands, is tamper-proof thanks to its write-once-read-many (WORM) nature, and could potentially continue working for hundreds of thousands of years, provided nobody decides to get swing-happy with any baseball bats in the data centre during that period.

Goodwin is more sceptical.
Simply put, he argues, many of these candidates simply aren’t yet as cost-effective as magnetic tape. Indeed, when Goodwin hears predictions that tape is about to be superseded by another storage technique, his mind drifts back to the marketing campaigns for holographic storage, the ‘tape killer’ of the 2000s that couldn’t attract enough venture capital to catch on. There’s no reason to believe that magnetic tape, forever being pulled back into the line of fire for one last job, will finally stay retired. But for that to happen, says Goodwin, experimental media ultimately has to “exceed the advantages of tape – in terms of speed, and reliability, and cost.”
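The capacity arithmetic Lantz quotes earlier – areal density times recordable tape area – can be sanity-checked with a short sketch. The tape length and width below are assumptions, roughly those of a modern LTO-style cartridge, and are not figures given in the article.

```python
# Rough sanity check of the tape-capacity arithmetic quoted by Lantz.
# Tape dimensions are assumptions (approximating an LTO-style cartridge),
# not figures from the article.
TAPE_LENGTH_M = 1035               # assumed tape length in metres
TAPE_WIDTH_MM = 12.65              # assumed tape width in millimetres
AREAL_DENSITY_GBIT_PER_IN2 = 317   # demonstrated density cited by Lantz

M_PER_INCH = 0.0254

# Total tape surface area in square inches
area_in2 = (TAPE_LENGTH_M / M_PER_INCH) * (TAPE_WIDTH_MM / 1000 / M_PER_INCH)

# Capacity in terabytes: bits per square inch * area, divided by 8 bits/byte
capacity_tb = AREAL_DENSITY_GBIT_PER_IN2 * 1e9 * area_in2 / 8 / 1e12

print(f"tape surface area: {area_in2:,.0f} sq in")
print(f"potential capacity: {capacity_tb:,.0f} TB")
```

With these assumed dimensions the arithmetic lands around 800 TB – the same ballpark as the 580 TB Lantz cites. The overshoot is plausible because the sketch counts the entire tape surface, while real formats reserve area for servo tracks, margins and formatting overhead.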
February 23, 2023
This ransomware group wants you to double-cross your insurer | SC Media
Another ransomware group has emerged to threaten organizations, and they're very interested in your insurance plan.

What sets the HardBit group apart from the others is not its ransomware or TTPs — threat research published Feb. 20 by Varonis said it's unknown how the group gains initial access to victim networks — but rather its request that victims tell the attackers the maximum amount their insurance will cover for a ransom payment, so they can demand exactly that amount.

In an image posted by Varonis threat researchers to their blog, the ransom note appeals to the victim to stick it to the insurance company, "since the sneaky insurance agent purposefully negotiates so as not to pay for the insurance claim, only the insurance company wins in this situation."

"To avoid all this and get the money on the insurance, be sure to inform us anonymously about the availability and terms of insurance coverage, it benefits both you and us, but it does not benefit the insurance company."

An image by Varonis shows part of a ransom note by HardBit asking a victim for insurance details. (Varonis)

First observed in October, an updated version of HardBit ransomware was discovered by Varonis in late November. The group does not currently have a leak site.

One cybersecurity expert contacted by SC Media said it was fascinating to see ransomware gangs evolve their business models. As insurers have adapted to price out the costs of paying a ransom versus recovery, cybercriminals are adapting their demands to ensure they get paid without going over that limit.

"Ransomware gangs are businesses," said Mike Parkin, senior technical engineer at Vulcan Cyber.
"They are illegal and unethical, but they are businesses nonetheless." The biggest challenge to fighting ransomware is the nation-states that continue to shelter and support the criminal operations, Parkin continued, adding that the groups will continue to evolve until there is effective cooperation across the international law enforcement community.

Melissa Bischoping, director of endpoint security at Tanium, cautioned victims not to share details of their insurance with threat actors, since doing so may result in a denied claim. "As threat actors begin to view insured victims as a guaranteed payment source, I'd expect and hope to see regulation and/or legislation to prevent abuse of the system such as HardBit's tactics," said Bischoping.

See Varonis' post for more technical information about HardBit 2.0 and indicators of compromise.
February 22, 2023
Manufacturers Are the Top Target for Ransomware Attacks
Manufacturers are getting hit hardest by ransomware attacks. Even as attacks are down and responses to them have improved, ransomware continues to be an issue in manufacturing.

IBM Security's annual X-Force Threat Intelligence Index this year shows that incidents declined 4% from 2021 to 2022, and defense efforts were more successful in detecting and preventing ransomware. Yet the 2023 report showed that manufacturing was the most extorted industry last year, and the most attacked for the second consecutive year, accounting for about 1 in 4 attacks in 2022.

Related: 7 Simple Ways to Protect Yourself Against Ransomware

Manufacturers Can't Stand Downtime

Manufacturing organizations are an attractive target for extortion since they have an extremely low tolerance for downtime. According to the National Association of Manufacturers (NAM), ransomware attackers often target manufacturers by disabling their operations technology and blackmailing them into paying to restore the functionality of their systems. Manufacturers that cannot afford to have production halted by hacks often have no choice but to pay the hackers' ransom. NAM noted that manufacturers need to take steps to modernize and secure their IT and OT systems to avoid attacks.

Related: Can ML Hardware Really Detect Ransomware? Colonial Pipeline Says Yes

IBM Security's report revealed the stats behind attacks on manufacturers:

Manufacturers Hard-Hit by Extortion. At 27%, extortion was the top impact of cyberattacks in 2022; data theft followed at 19%. Ransomware and backdoor deployments together made up more than half of all incidents observed in 2022.

OT systems are low-hanging fruit for attackers.
OT systems are often difficult or impossible to patch, making them highly susceptible to older threats, which cybercriminals are increasingly exploiting. Even with a drop in ICS vulnerabilities reported in 2022, vulnerability exploitation remained one of the top causes of cyberattacks on manufacturing last year.

Ransomware: Too Big to Fail. Backdoor deployments were the top attacker action last year, and about 67% of those cases were failed ransomware attacks (where defenders were able to disrupt the backdoor before the ransomware was deployed). Even so, the improved defenses made only a modest dent: ransomware's share of incidents declined just 4 percentage points in 2022.
February 22, 2023
The Air-Gapped, Immutable Storage Future is Now - ITPro Today
In today’s environment, most businesses know it’s only a matter of time before they will be hit with malware, including ransomware. More often than ever, these malicious attacks are targeting backups. A May 2022 report from Veeam, for example, found that 94% of attackers attempted to destroy backup repositories, and in 72% of cases their efforts were at least partially successful. With backups and storage clearly in the crosshairs, organizations need to address the risks head-on. Yet a 2022 survey by Pure Storage found that only 49% of organizations take extra measures to protect their backup copies.

Related: Data Storage Market Trends to Watch in 2023

To combat this growing problem, proactive companies are turning to three basic solutions: tape, immutability, and air gapping. While none of these technologies are new, they can be effective, especially when combined.

Pros and Cons of Tape Storage and Backup

Tape is the oldest and most maligned method of data backup and storage. However, because it is offline, it is intrinsically air-gapped and immutable. What’s more, it’s often stored offsite. Tape also provides write-once-read-many (WORM) technology, so data can never be overwritten or deleted. Some tape vendors have even upped the ante with additional protections. For example, Quantum Corp. now lets users of some of its Scalar tape libraries set tapes to eject partially, creating a physical air gap. That way, the tapes can’t be seen or chosen by a malicious bot.

Tape does have its drawbacks, a notable one being that it only suits data that’s no longer used or used very infrequently. Most businesses today don’t want the hassle or labor involved in tape storage and backup. They would much prefer cloud-based or at least data center-based technology – a combination of cloud, virtual, and physical.

Why Immutability Is Important

Keeping data safe in any environment requires immutability and/or air gapping.
Immutability means that files can’t be modified during a set retention time, making it ideal for data that must be preserved intact for long periods. Businesses can set immutability to expire or to remain in place indefinitely. When the immutability does expire, the data can be accessed or deleted, according to the rules set.

Immutable backup and storage offer multiple benefits. In terms of security, immutability protects data against malicious actors. It can also help avoid accidental file deletion or modification, improve compliance and data authenticity, speed up disaster recovery times, and protect backups against retention policy changes and deletion of restore points.

The immutability concept has traditionally been associated with object storage because object storage is intrinsically immutable. It’s also standard today for object storage to employ object locking, the same mechanism most immutable storage products use. Additionally, because object storage systems essentially split files up into thousands of encoded and encrypted pieces, object storage will usually stymie hackers.

While these capabilities are valuable, object storage doesn’t work in every scenario. File and block storage systems, for example, are much better suited for structured data. Production data isn’t a good candidate for immutability because users will probably want to modify it at some point. But there’s good news: More vendors than ever are applying immutability to more than just object storage. “It goes back to the evolution from hardware media-based WORM to software-based technology,” explained Paul Speciale, chief product officer at Scality, an object storage vendor.
“Ultimately, all storage sits on top of underlying block storage, so immutability has to be enforced at the software layer managing the storage.”

How Air Gapping Works

Another way to improve data security is through air gapping, a technique that keeps a separate copy of backups disconnected from the network. The air-gapped copy is often stored at an offsite location. There are two basic types of air gaps:

A physical air gap disconnects the backup from the network after it is written, reconnecting only when the next backup is due.

A logical air gap sends backups to a physically separate location. The backups, however, aren’t completely disconnected from the network; the backup software does the heavy lifting, preventing them from being overwritten or deleted.

An offshoot of the air-gapping method is the data vault or cybervault – an offline location that is physically and logically isolated from the production environment. Despite the effectiveness of air gapping, relatively few organizations take advantage of the technique. An Enterprise Strategy Group survey found that only 30% of organizations have deployed an air gap that separates production and backup networks.

Immutability, Air Gapping, or Both?

So, how should you go about incorporating at least one of these data protection methods into your technology stack and processes? According to Christophe Bertrand, a practice director at Enterprise Strategy Group, it’s both an architecture- and business-driven decision.

“It depends on your objective. If it’s to strengthen or harden the backup infrastructure, you need backups that go on immutable storage of some type,” Bertrand said. “If you need archives for compliance purposes that you have to demonstrate can’t be adulterated, then which storage tier becomes another question.
If you have to keep data for 30 years, you don’t want to put it on expensive disks.”

In addition to determining the best architectural option, there are, of course, economic considerations, said Oscar Arean, a technical director at Databarracks. “In some cases, one of these options might sound like a great idea at first, but, in reality, it could really increase your backup costs,” Arean explained. “It’s about balancing the additional cost with the potential risk and figuring out what makes sense for your particular case.”

If you can swing it, consider products that include both immutability and air gapping. Bertrand went as far as to say data protection isn’t complete without both, plus the right cybersecurity protections. “It’s one thing to make the data immutable so it can’t be modified, but that doesn’t mean that somebody still couldn’t access it, read it, and exfiltrate something by gaining access to some intelligence,” Bertrand said. “It’s important also to air gap some of your data and make it immutable so it’s only connected to the network when it’s backing up or making a copy.”

What to Consider Before You Buy

Before blasting through your tech budget to achieve the right levels of immutability and air gapping, it makes sense to re-evaluate what you already have. While it may be time for a refresh, especially if your technology is more than a few years old or hardware-based, it pays to examine the features available in your existing technologies. Many vendors continue to upgrade their offerings with these features.

If you opt for a replacement, do your homework, Arean said. Make sure the new technology is compatible with existing or planned technology. In addition, the replacement should allow for layers of control to manage the storage over time.
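The retention-based immutability described above can be sketched as a toy write-once store. This is an illustrative model of object-lock semantics only; the class and method names are invented for the example and do not correspond to any vendor's actual API.

```python
import time

class WormObjectStore:
    """Toy model of retention-based immutability (object locking).

    Once written, an object cannot be overwritten or deleted until its
    retention period expires - the semantics described in the article.
    Illustrative only; names are invented, not a real vendor API.
    """
    def __init__(self):
        self._objects = {}  # key -> (data, retain_until_epoch)

    def put(self, key, data, retention_secs):
        # Refuse to overwrite an object that is still under retention
        if key in self._objects and time.time() < self._objects[key][1]:
            raise PermissionError(f"{key} is locked until retention expires")
        self._objects[key] = (data, time.time() + retention_secs)

    def delete(self, key):
        # Refuse to delete an object that is still under retention
        if time.time() < self._objects[key][1]:
            raise PermissionError(f"{key} is locked until retention expires")
        del self._objects[key]

    def get(self, key):
        return self._objects[key][0]

store = WormObjectStore()
store.put("backup-2023-02-20", b"snapshot bytes", retention_secs=3600)
try:
    store.delete("backup-2023-02-20")  # blocked: still under retention
except PermissionError as err:
    print("delete refused:", err)
```

Real object stores expose comparable behavior through their locking features, with retention dates enforced below the API rather than in client code, which is what makes the approach useful against ransomware that has already compromised the application layer.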
Most importantly, organizations must understand that air-gapped and immutable backups and storage are the last line of defense, not the first. “This does not replace network firewalls, network protection, or application protection,” Speciale stressed. “It’s just part of the stack and should be considered the last line of defense.”

About the author

Karen D. Schwartz is a technology and business writer with more than 20 years of experience. She has written on a broad range of technology topics for publications including CIO, InformationWeek, GCN, FCW, FedTech, BizTech, eWeek and Government Executive.
February 20, 2023
GoDaddy reveals three years of ongoing attacks • The Register
In brief Web hosting and domain name concern GoDaddy has disclosed a fresh attack on its infrastructure, and concluded that it is one of a series of linked incidents dating back to 2020.

The business took the unusual step of detailing the attacks in its Form 10-K – the formal annual report listed entities are required to file in the US. The filing details a March 2020 attack that "compromised the hosting login credentials of approximately 28,000 hosting customers to their hosting accounts as well as the login credentials of a small number of our personnel" and a November 2021 breach of its hosted WordPress service.

The latest attack came in December 2022, when boffins detected that "an unauthorized third party gained access to and installed malware on our cPanel hosting servers," the filing states. "The malware intermittently redirected random customer websites to malicious sites." GoDaddy is unsure of the root cause of the incident, but believes it could be the result of "a multi-year campaign by a sophisticated threat actor group that, among other things, installed malware on our systems and obtained pieces of code related to some services within GoDaddy."

"To date, these incidents as well as other cyber threats and attacks have not resulted in any material adverse impact to our business or operations," the filing states – showing enormous empathy for customers whose sites were redirected in the most recent attack, or impacted by the earlier incidents. In a brief statement on the incident, GoDaddy hypothesized that the goal of the December 2022 attacks "is to infect websites and servers with malware for phishing campaigns, malware distribution and other malicious activities." – Simon Sharwood

Moscow considers legalizing hacking – but only for the glory of Mother Russia

The Russian government is working on changes to its criminal code that would legalize hacking in the Federation – provided it's being done in the service of Russian interests, of course.
According to Russian news service TASS, Alexander Khinshtein, head of the state Duma committee on information policy, wants hackers to be given exemptions from liability, but aside from tossing the idea out to reporters he didn't have details to add. Still, Khinshtein argued, "I am firmly convinced that it is necessary to use any resources to effectively fight the enemy," adding that Russia needs to be able to respond adequately to any threat – and who better to help than a well-established army of hackers?

Russian-linked hacking groups like Killnet, Cozy Bear, Vice Society and the myriad others are notorious for the damage they have caused – or attempted – in attacks on Russia's enemies, both in Ukraine and elsewhere. Those groups may operate with a certain amount of impunity within Russia, but the law still isn't on their side, as TASS pointed out. Russian laws regarding cyber crimes are strict – if not always enforced – and exceptions are reportedly nonexistent. Two sets of laws pertain to hacking activity: Articles 272 and 273 of the Criminal Code of the Russian Federation, which cover illegal access and the creation, distribution and use of malicious computer software, respectively. Gaining illegal access and/or using malicious software, if it leads to "grave consequences or [the creation of] a threat," can earn a Russian up to seven years in prison, with lesser terms possible for less damage or for acting independently of a group.

Adding exceptions for what TASS described as "white hat" operations in the interest of the Russian government would provide considerable leeway for state-sponsored hackers already doing so. More alarming, however, is the encouragement it would give to green hats more likely to break a system than break into it, script kiddies in it for the lulz, and dark web turnkey crooks.
There's no indication such a law is on the way to passage – Khinshtein said it still needed to be spoken about "in more detail" – but it might be a good idea to reinforce that security posture. Especially if you're in a critical industry.

Critical vulnerabilities of the week

We're still hot on the heels of February's rather romantic Patch Tuesday, so if you're wondering where a few well-publicized vulnerabilities are in this list – we may have already covered them. That said, there's still plenty of patching fun to be had if you're not sick of it already.

CVSS 10.0 – CVE-2023-24482: Siemens COMOS plant engineering software contains a buffer overflow vulnerability that could allow a remote attacker to execute arbitrary code and cause a denial of service;

CVSS 9.8 – CVE-2022-1343: Siemens Brownfield Connectivity Client contains several vulnerabilities able to cause a denial-of-service condition;

CVSS 9.8 – CVE-2022-46169: Open source operational monitoring and fault management software Cacti contains a command injection vulnerability which is not new, but CISA said it has recently been spotted being exploited in the wild, so patch now;

CVSS 9.8 – CVE-2022-39952: FortiNAC web server may allow an unauthenticated attacker to perform an arbitrary write due to an external control of file name path vulnerability (now patched);

CVSS 9.3 – CVE-2021-42756: FortiWeb's proxy daemon has multiple stack-based buffer overflow vulnerabilities that can allow an unauthenticated attacker to achieve arbitrary code execution.

Mozilla's Firefox 110, Firefox ESR 102.8 and Thunderbird 102.8 were also released this week, addressing a total of eight CVEs shared by a mix of the three products. As Mozilla's bug reports are restricted and it doesn't provide actual CVSS scores, we've selected bugs it rates as high priority, defined as those that can be used to gather sensitive data and "requiring no more than normal browsing actions."
None of the bugs Mozilla patched in this release were considered critical.

CVE-2023-0767: Maliciously crafted PKCS 12 files can be used to trigger arbitrary memory writes;

CVE-2023-25728: the Content-Security-Policy-Report-Only header can be abused to leak a child iframe's unredacted URI;

CVE-2023-25730: Requesting fullscreen mode and then blocking the main thread can force Firefox into fullscreen mode indefinitely, allowing confusion or spoofing attacks;

CVE-2023-25735: Firefox's Spidermonkey JavaScript engine has a use-after-free bug due to a compartment mismatch;

CVE-2023-25737: An invalid downcast from nsTextNode to SVGElement can cause undefined behavior;

CVE-2023-25738: Firefox on Windows has a printing bug that is crashing device drivers;

CVE-2023-25739: Failed module load requests aren't being checked, leading to use-after-free vulnerabilities in ScriptLoadContext;

CVE-2023-25743: Firefox Focus doesn't include a notification for entering fullscreen mode, which could allow malicious website spoofing.

Finally, CVE-2023-24809 won't keep anyone up at night, unless they are avid players of the venerable Rogue-like adventure game NetHack. The 5.5-rated flaw is found in versions 3.6.2 through 3.6.6 and means illegal input to the "C" (call) command can cause a buffer overflow and crash the NetHack process. "This vulnerability may be a security issue for systems that have NetHack installed suid/sgid and for shared systems," an advisory warns. Upgrading to version 3.6.7 solves the problem. No save-scumming, people!

Emergency declared in Oakland, CA after ransomware attack

Oakland, California declared a state of emergency on Valentine's Day – and not because there was too much love in the air.
A week of work hasn't done a whole lot to clear up a ransomware attack that hit the city on February 8. As we reported in last week's security roundup, the attack didn't take down 911 services, disrupt finances or worsen emergency response times, but the precaution of taking a good portion of the city's network offline to stop the attack has led to a slow recovery and left some non-emergency systems inaccessible.

"The network outage has impacted many non-emergency systems including our ability to collect payments, process reports, and issue permits and licenses," the city declared in an update on February 15, adding that residents should call before showing up at a city office in case it's closed. The Oakland government said that police and fire departments are still responding to emergency calls as usual, but that non-emergency requests should be made online or reported via a call to the local 311 non-emergency line.

By declaring a state of emergency, Oakland has expedited its ability to procure equipment and materials to respond to the ransomware attack, as well as activating emergency workers and making it easier for leadership to issue orders. The city government said the investigation is ongoing, with law enforcement involved, but it hasn't said how the attack occurred, who was behind it or what sort of ransom demand was made. ®
February 20, 2023
HardBit ransomware wants insurance details to set the perfect price - Bleeping Computer
A ransomware threat called HardBit has moved to version 2.0, and its operators are trying to negotiate ransom payments that would be covered by the victim's insurance company. Specifically, the threat actor tries to convince the victim that it is in their interest to disclose all insurance details so the attackers can adjust their demands to ensure the insurer covers all costs.

Emergence of HardBit 2.0

The first version of HardBit was observed in October 2022, while version 2.0 was introduced in November 2022 and remains the currently circulating variant, according to a report from Varonis, a data security and analytics company. Unlike most ransomware operations, HardBit does not feature a data leak site, although its operators claim to steal victim data and threaten to leak it unless a ransom is paid.

As a ransomware strain, HardBit 2.0 features some capabilities to lower the victim's security, like modifying the Registry to disable Windows Defender's real-time behavioral monitoring, process scanning, and on-access file protections. The malware also targets 86 processes for termination to make sensitive files available for encryption. It establishes persistence by adding itself to the "Startup" folder, and deletes the Volume Shadow Copies to make data recovery more difficult.

An interesting element of the encryption phase is that instead of writing encrypted data to file copies and deleting the originals, as many strains do, HardBit 2.0 opens the files and overwrites their content with encrypted data. This approach makes it harder for experts to recover the original files and makes the encryption slightly faster.

Ransom negotiation

Like other ransomware strains, the note that HardBit 2.0 drops on the victim's system does not state the amount the hackers want in exchange for the decryption key.
Victims get 48 hours to contact the attackers over an open-source, encrypted peer-to-peer messaging app.

HardBit 2.0 ransom note (Varonis)

The threat actor advises victims not to work with intermediaries, since this would only drive up the total cost, but to contact them directly for negotiations. For companies that have insurance for cyberattacks, the hackers have a more elaborate set of instructions, urging them to disclose the insurance amount for successful dialogue. What's more, the hackers make it look like sharing the insurance details is beneficial to the victim, painting the insurer as the bad guy that stands in the way of recovering their data. The threat actors claim that insurers never negotiate with ransomware actors with their clients' interests in mind, making ludicrous counter-offers just to derail the negotiations and avoid paying.

"To avoid all this and get the money on the insurance, be sure to inform us anonymously about the availability and terms of the insurance coverage, it benefits both you and us, but it does not benefit the insurance company," HardBit operators say in a note to victims.

Instructions for insurance holders (Varonis)

The attackers say that if they know the exact insurance amount, they will know exactly how much to ask so the insurer is forced to cover the demand. Of course, victims are also typically barred by contract from disclosing insurance details to attackers, and doing so risks losing any chance of the insurer covering the damages. This is why the hackers insist that these details be shared privately. Regardless of their offer, the ransomware operators' goal is to get paid, and they will say anything to get the money.
The reality is that they cannot be trusted. Refusing to pay the ransom, reporting the incident to law enforcement, and maintaining a consistent backup strategy are the only ways to fight this type of threat and bring it to an end.

The report from Varonis provides technical details on how HardBit 2.0 works, from the initial stage of disabling security features to gaining persistence and deploying the encryption routine. The researchers have also shared indicators of compromise (IoCs) that help identify the threat.
February 21, 2023
Complexity, volume of cyber attacks lead to burnout in security teams
The rapid evolution of cybercrime is weighing on security teams substantially more than it did last year, leading to widespread burnout and potential regulatory risk, according to Magnet Forensics.

"Digital forensics and incident response teams have proven to be indispensable to combat cybercriminals, but the complexity and volume of attacks and the dearth of talent available to address them is leading to unprecedented burnout," said Adam Belsher, CEO of Magnet Forensics.

The annual Magnet Forensics survey polled 492 digital forensics and incident response (DFIR) decision-makers and practitioners, predominantly located in North America, Europe, the Middle East and Africa. Its respondents described the current cybercrime landscape as evolving beyond ransomware and taking a toll on their ability to investigate.

Growing incident waves overwhelm DFIR teams

40% of respondents described the evolution of cyberattack techniques as a "large" or "extreme" problem impacting their investigations. This represents a 50% increase from the 2022 State of Enterprise DFIR report.

Business email compromise is on the rise and is now occurring more frequently than ransomware, the most common security threat in last year's report. The highest number of respondents — 14% — said they encounter it "very frequently." Business email compromise attacks are also the most likely to require third-party resources to assist with the investigation, according to 50% of respondents.

It's taking security teams too long to get to the root cause of these evolving attacks: 43% said it takes them between one week and more than a month, and about 1 in 3 respondents said that identifying the root cause requires either a "complete overhaul" or "major improvements."

With cybercriminals intensifying their efforts, DFIR teams now find themselves investigating waves of incidents that are growing in volume and complexity.
According to 45% of respondents, the surge in investigations and the data associated with them is either a "large" or "extreme" problem for their organizations. Unable to handle this data alone, nearly two-thirds look to third parties for help. A global talent shortage, one that's left more than 755,000 cyber jobs unfilled in the U.S. alone, isn't helping matters, according to the respondents. Nearly 1 in 3 say that recruiting and hiring new DFIR professionals for a security team is a challenge. Each of these factors is contributing to burnout and leading teams to seek out alternate solutions like automation.

Alert and investigation fatigue is likely playing a role in burnout

54% of the respondents said they were feeling burned out in their jobs, and 64% said alert and investigation fatigue is a "real issue." Today's investigative workflows are being slowed by a reliance on repetitive tasks and on tools that aren't interoperable; the same percentage of respondents — 37% — described each as either a "large" or "extreme" problem. The workload may also be exposing their organizations to regulatory risk: 46% said they just don't have the time to understand new cybersecurity regulations.

The respondents see automation as the solution. 50% said automation would be "extremely valuable" or "highly valuable" for several DFIR tasks, including the remote acquisition of target endpoints and the processing of digital evidence.