Developers and the Fear of Apple
from the think-different-except-about-us dept.

An anonymous reader writes: UI designer Eli Schiff has posted an article about the “climate of fear” surrounding Apple in the software development community. He points out how developers who express criticism in an informal setting often recant when their words are being recorded, and how even moderate public criticism is often prefaced by flattery and endorsements. 

Beyond that, the industry has learned that it can’t rely on Apple’s walled garden to make a profit. The opaque app review process, the race to the bottom on pricing, and Apple’s resistance to curation of the App Store are driving “independent app developers into larger organizations and venture-backed startups.” Apple is also known to cut contact with developers if they release for Android first. The “climate of fear” even affects journalists, who face not only stonewalling from Apple after negative reporting, but also a brigade of Apple fans and even other journalists trying to paint them as anti-Apple.

10 Ways To Measure IT Security Program Effectiveness


The right metrics can make or break a security program (or a budget meeting).

As CISOs try to find ways to prove ROI to higher-ups and improve the overall effectiveness of security operations, the right metrics can make or break their efforts. Fortunately, infosec as an industry has matured to the point where many enterprising security leaders have found innovative and concrete measures to track performance and drive toward continual improvement. Dark Reading recently surveyed security practitioners and pundits to find out the best time-tested metrics to prove security effectiveness, ask for greater investment, and push security staff to improve their day-to-day work.

Average Time To Detect And Respond

Also referred to as mean time to know (MTTK), the average time to detect (ATD) measures the delta between an issue occurring—be it a compromise or a configuration gone wonky—and the security team figuring out there’s a problem. 

“By reducing ATD, Security Operations Center (SOC) personnel give themselves more time to assess the situation and decide upon the best course of action that will enable the enterprise to accomplish its mission while preventing damage to enterprise assets,” says Greg Boison, director of cyber and homeland security at Lockheed Martin.

Meanwhile, the mean time to resolution, or average time to respond, measures how long it takes the security team to respond appropriately to an issue and mitigate its risk.

“Average Time to Respond (ATTR) is a metric that tells SOC management and personnel whether or not they are meeting objectives to quickly and correctly respond to identified violations of the security policy,” Boison says. “By reducing ATTR, SOC personnel reduce the impact (including the cost) of security violations.”

Tracking these two metrics continuously shows whether a security program is improving or deteriorating; ideally, both should shrink over time.
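To make the bookkeeping concrete, here is a minimal sketch in Python, assuming each incident record carries occurrence, detection, and resolution timestamps (the field names are hypothetical, not from the article):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; the field names are illustrative.
incidents = [
    {"occurred": datetime(2015, 3, 1, 9, 0),
     "detected": datetime(2015, 3, 1, 14, 30),
     "resolved": datetime(2015, 3, 2, 10, 0)},
    {"occurred": datetime(2015, 3, 3, 8, 0),
     "detected": datetime(2015, 3, 3, 9, 15),
     "resolved": datetime(2015, 3, 3, 17, 45)},
]

def hours_between(start, end):
    return (end - start).total_seconds() / 3600

# ATD: average gap between an issue occurring and the SOC noticing it.
atd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
# ATTR: average gap between detection and an appropriate response.
attr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"ATD:  {atd:.1f} h")
print(f"ATTR: {attr:.1f} h")
```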

False Positive Reporting Rate

Tracking the False Positive Reporting Rate (FPRR) can help put the work of lower-level analysts under the microscope, making sure that their judgments on automatically filtered security event data sift out false positives from true indicators of compromise before escalation to others on the response team.

“Despite the implementation of automated filtering, the SOC team must make the final determination as to whether the events they are alerted to are real threats,” Boison of Lockheed Martin says. “The reporting of false positives to incident handlers and higher-level management increases their already heavy workload and, if excessive, can de-motivate and cause decreased vigilance.”

A high FPRR could indicate that Level 1 analysts need better training, or that analytics tools need better tuning.

“All too often, Level 1 analysts lack a good understanding of, and visibility into, an incident’s cause, and therefore escalate false alerts to Level 3 analysts,” says Lior Div, CEO of Cybereason. “This wastes expensive resources.”
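As a rough illustration, FPRR can be computed as the share of escalated alerts that incident handlers later close as false positives; the log layout below is hypothetical:

```python
# Hypothetical escalation log: True means the escalated alert turned out
# to be a false positive rather than a real indicator of compromise.
escalations = [
    {"alert_id": 1, "false_positive": True},
    {"alert_id": 2, "false_positive": False},
    {"alert_id": 3, "false_positive": True},
    {"alert_id": 4, "false_positive": False},
]

false_positives = sum(1 for e in escalations if e["false_positive"])
fprr = false_positives / len(escalations)
# A rising FPRR points at Level 1 training gaps or poorly tuned tools.
print(f"FPRR: {fprr:.0%}")
```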

Mean Time To Fix Software Vulnerabilities

Whether for web, mobile, cloud-based, or internal applications, organizations that build custom software should be measuring how long it takes to remediate software vulnerabilities from the time they’re identified, says John Dickson, principal at Denim Group. 

“This measurement helps organizations understand the window of vulnerability in production software,” Dickson says. “Unfortunately, most organizations do not publish this metric internally and as a result, the most serious application vulnerabilities, like SQL injections, remain in production far too long.”

Realistically, this number may be skewed by fixes that never occur, particularly during the development process. That is why organizations should also track the number of critical defects fixed against the number reported, which shows how effective static analysis is for the organization, says Caroline Wong, director of security initiatives for Cigital.

“To obtain this metric, the software security group must be performing static analysis, counting the number of defects initially found — by classification, during first scan — and counting the number of (critical) defects which are actually fixed by developers,” Wong says. “The quality of the code will not actually increase until the developer performs triage on the findings and fixes the actual software defects. The desired trend for this metric is to increase towards 100 percent.”
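A minimal sketch of that fix-rate metric, assuming a list of static-analysis findings tagged with a severity classification and a fix status (field names hypothetical):

```python
# Hypothetical static-analysis findings from a first scan.
findings = [
    {"id": "SQLI-01", "severity": "critical", "fixed": True},
    {"id": "XSS-02",  "severity": "critical", "fixed": False},
    {"id": "INFO-03", "severity": "low",      "fixed": False},
]

critical = [f for f in findings if f["severity"] == "critical"]
fix_rate = sum(f["fixed"] for f in critical) / len(critical)
print(f"Critical defects fixed: {fix_rate:.0%}")  # desired trend: toward 100%
```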

Patch Latency

In the same vein, patch latency can show how effective the program is at reducing risk from the low-hanging fruit.

“We need to demonstrate progress in the vulnerability patch process. For many organizations with thousands of devices, this can be a daunting task. Focus on critical vulnerabilities and report patching latency,” says Scott Shedd, security practice leader for consulting firm WGM Associates. “Report what we patched, what remains unpatched, and how many new vulnerabilities have been identified.”
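One way such a report might be assembled, assuming records of when each critical patch became available and when (or whether) it was applied; the field names are hypothetical:

```python
from datetime import date

# Hypothetical records for critical vulnerabilities.
vulns = [
    {"cve": "CVE-2015-0001", "patch_released": date(2015, 1, 5),
     "patched_on": date(2015, 1, 20)},
    {"cve": "CVE-2015-0002", "patch_released": date(2015, 2, 1),
     "patched_on": None},  # still unpatched
]

patched = [v for v in vulns if v["patched_on"]]
latency = [(v["patched_on"] - v["patch_released"]).days for v in patched]
print(f"Patched: {len(patched)}, unpatched: {len(vulns) - len(patched)}")
if latency:
    print(f"Average patch latency: {sum(latency) / len(latency):.0f} days")
```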

Incident Response Volume

Tracking the total number of incident response cases opened against those closed and pending helps CISOs see how well incidents are being found and addressed.

“This shows that incidents are being identified along with remediation and root cause analysis,” says Shedd of WGM. “This is critical for continuous improvement of an information security program.”
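The tally itself is simple; a sketch, assuming a case-tracker export with one status per case (the labels are hypothetical):

```python
from collections import Counter

# Hypothetical case statuses pulled from an incident tracker.
cases = ["closed", "closed", "pending", "open", "closed", "pending"]

totals = Counter(cases)
print(f"Opened: {len(cases)}, closed: {totals['closed']}, "
      f"pending: {totals['pending']}")
```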

Fully Revealed Incidents Rate 

This metric can also help get a bead on the effectiveness of the incident response and security analyst functions within a program. 

“What is the rate of incidents handled by the security team in which they have a full understanding of the reason for the alert, the circumstances causing it, its implications, and its effect?” says Div of Cybereason.

A low rate relative to the overall volume of opened cases reveals gaps in visibility and can justify asking for more investment in people or tools.
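Illustratively, assuming each handled case is flagged with whether the team reached that full understanding (the field name is hypothetical):

```python
# Hypothetical record of handled incidents.
handled = [
    {"case": 101, "fully_revealed": True},
    {"case": 102, "fully_revealed": False},
    {"case": 103, "fully_revealed": True},
]

rate = sum(h["fully_revealed"] for h in handled) / len(handled)
print(f"Fully revealed incidents rate: {rate:.0%}")  # low values flag visibility gaps
```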

Analytic Production Time

Is your security program suffering from information overload? Measuring the time it takes to collect data compared to when it is analyzed can help answer that question.

“Reducing the analytical timeline allows IT teams to recognize and act more quickly to prevent, or detect and address, breaches, thereby improving the organization’s overall security posture,” says Christopher Morgan, president of IKANOW.

“Reducing the time it takes to analyze security data, from either internal firewall or SIEM information or outside threat intelligence feeds, requires giving data scientists the tools and time to focus on data analysis,” he says.
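A minimal sketch of the measurement, assuming each piece of security data carries a collection timestamp and an analysis timestamp (field names hypothetical):

```python
from datetime import datetime
from statistics import mean

# Hypothetical pipeline log: when each event was collected vs. analyzed.
events = [
    {"collected": datetime(2015, 3, 2, 8, 0),
     "analyzed":  datetime(2015, 3, 2, 20, 0)},
    {"collected": datetime(2015, 3, 2, 9, 0),
     "analyzed":  datetime(2015, 3, 3, 9, 0)},
]

lag_hours = mean(
    (e["analyzed"] - e["collected"]).total_seconds() / 3600 for e in events
)
print(f"Average analytic production time: {lag_hours:.1f} h")
```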

Percent Of Projects Completed On Time And On Budget 

CISOs can show accountability by giving the CEO, board, and CFO visibility into the spending process, offering metrics on the percentage of strategic IT security projects completed on time and on budget, says Dan Lohrmann, chief strategist and chief security officer at Security Mentor.

“This could be a project on encryption, new firewalls, or whatever the top security projects are,” Lohrmann says. “This metric ensures that security is accountable for delivering ever-increasing value and improvements to the executive team.”

Percentage Of Security Incidents Detected By An Automated Control

One way to justify spending on those shiny boxes is to start tracking how many of the security incidents the organization detects are found by an automated tool.

“This is a good one because it not only encourages you to become familiar with how incidents are detected, it also focuses you on automation, which reduces the need for ‘humans paying attention’ as a core requirement,” says Dwayne Melancon, CTO of Tripwire. “It also makes it easier to lobby for funding from the business, since you can make the case that automation reduces the cost of security while lowering the risk of harm to the business from an unnoticed incident.”
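One plausible way to compute the share, assuming each incident is tagged with the control or person that first detected it (the labels are invented):

```python
from collections import Counter

# Hypothetical tags for how each incident was first detected.
detections = ["ids", "human", "siem", "ids", "human", "fim", "ids"]
automated = {"ids", "siem", "fim"}  # which sources count as automated controls

counts = Counter(detections)
auto_share = sum(n for src, n in counts.items() if src in automated) / len(detections)
print(f"Detected by automated controls: {auto_share:.0%}")
```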

Employee Behavior Metrics

Just how effective is all of that “soft” spending on security awareness training? Steve Santorelli of Team Cymru says there are ways to track and measure it, primarily through phishing and social-engineering stress testing that gauges staff awareness of both kinds of attack.

“Basically, you run a fake phishing campaign and make a few hoax calls,” says Santorelli, director of analysis and outreach for the research firm. “Reward and publicize good results, help those who fail learn from their errors, and you’ll have folks actively watching out for these attacks, for a few weeks at least.”
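Tracking the results of repeated campaigns might look like this; the numbers are invented for illustration:

```python
# Hypothetical results from three successive fake phishing campaigns.
campaigns = [
    {"month": "Jan", "sent": 200, "clicked": 58},
    {"month": "Feb", "sent": 200, "clicked": 31},
    {"month": "Mar", "sent": 200, "clicked": 19},
]

for c in campaigns:
    print(f"{c['month']}: click rate {c['clicked'] / c['sent']:.0%}")
# A falling click rate suggests the awareness training is sticking.
```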


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

New Evidence Strengthens NSA Ties To Equation Group Malware

from the tax-funded-hacks dept.
An anonymous reader writes: When researchers from Kaspersky Lab presented the Equation Group espionage malware, many in the security community were convinced it was part of an NSA operation. Now, Kaspersky has released new evidence that only strengthens those suspicions. In a code sample, they found a string named BACKSNARF_AB25, which happens to be the name of a project in the NSA’s Tailored Access Operations. Further, when examining the metadata on the malware files, they found the modification timestamps were almost always consistent with an 8-to-5 workday in the UTC-3 or UTC-4 timezones, matching work based in the eastern United States. The authors also tended to work Monday through Friday and not on weekends, suggesting a large, organized development team. “Whereas before the sprawling Equation Drug platform was known to support 35 different modules, Kaspersky has recently unearthed evidence there are 115 separate plugins. The architecture resembles a mini operating system with kernel- and user-mode components alike.”
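For illustration only, the kind of timestamp analysis Kaspersky describes can be sketched as below; the timestamps and the candidate timezone are invented:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Invented file-modification timestamps, recorded in UTC.
utc_stamps = [
    datetime(2008, 6, 2, 12, 5, tzinfo=timezone.utc),   # a Monday
    datetime(2008, 6, 3, 16, 40, tzinfo=timezone.utc),  # a Tuesday
    datetime(2008, 6, 6, 20, 55, tzinfo=timezone.utc),  # a Friday
]

# Shift into a candidate timezone and check whether the activity clusters
# into an 8-to-5, Monday-to-Friday pattern.
candidate = timezone(timedelta(hours=-4))  # UTC-4, e.g. the US East Coast in summer
local = [ts.astimezone(candidate) for ts in utc_stamps]

print("Hour-of-day histogram:", dict(Counter(ts.hour for ts in local)))
print("Day-of-week histogram:", dict(Counter(ts.strftime("%A") for ts in local)))
```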

Ubuntu To Officially Switch To systemd Next Monday

from the dissenting-dachshund dept.
jones_supa writes: Ubuntu is going live with systemd, reports Martin Pitt on the ubuntu-devel-announce mailing list. Next Monday, Vivid (15.04) will be switched to boot with systemd instead of Upstart. The change concerns desktop, server, and all other current flavors. Technically, this will flip the preferred init dependency to systemd-sysv | upstart in package management, which will affect new installs but not upgrades. Upgrades will be switched by adding systemd-sysv to ubuntu-standard‘s dependencies. If you want, you can make the change manually already, but it’s advisable to do a one-time boot with systemd first. For now, the important thing is that if you run into any trouble, you file a proper bug report in Launchpad (ubuntu-bug systemd). If after some weeks it turns out that there are too many or too big regressions, Ubuntu can still revert back to Upstart.