
Insights Unlocked

4 Posts authored by: Jeff Harris

Cryptocurrency news roared in like a lion in 2018. As 2017 came to a close, the price of Bitcoin briefly peaked above $19,000 USD. It got me thinking, a lot.

 

I made a mental note: I should have bought BTC a year earlier at the bargain price of $997 a coin. I also made a note to research how Bitcoin and other blockchain technologies would impact networks and businesses in 2018. This blog series is the result of that research.

 

Underneath the success of Bitcoin and Monero is a fundamental technology shift called blockchain. Understanding why Bitcoin didn’t implode in its first year requires understanding how digital ledgers work. If you want that understanding, this primer is for you. Starting where it all began: blockchain was born of necessity, as a way to let a new cryptocurrency emerge and evolve without middlemen.

 

In an article called A Brief History of Blockchain, Harvard Business Review calls blockchain a “quiet revolution.” Bitcoin and the underlying digital ledger technology that made it possible were introduced as an alternative to government-backed currencies nearly a decade ago. Today, cryptocurrency transaction volume is over $1B a day.

Blockchain is what is called a digital, or distributed, ledger: essentially a distributed database with no centralized data storage. Bitcoin was the first and most popular application of blockchain technology, though the technology is gaining momentum in many other business applications. That underlying technology is what allows Bitcoin to be decentralized and fully transparent – one of the fundamental principles of the currency. Anyone can trace the history of transactions through the blockchain at any time.
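To make the “chain” part of blockchain concrete, here is a minimal sketch in Python (purely illustrative, and nothing like Bitcoin’s real block or wire format) of how each block commits to its predecessor by embedding the previous block’s hash. Change any historical transaction and every later hash stops matching, which is what makes the ledger tamper-evident and traceable:

    import hashlib, json, time

    def make_block(transactions, prev_hash):
        """Build a block whose identity depends on its contents and its predecessor."""
        block = {
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    # A tiny two-block chain: tampering with the first block would change its hash
    # and break the link stored in the second, which is what an auditor can verify.
    genesis = make_block(["alice pays bob 1 coin"], prev_hash="0" * 64)
    block_2 = make_block(["bob pays carol 0.5 coin"], prev_hash=genesis["hash"])
    print(block_2["prev_hash"] == genesis["hash"])   # True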

 

Bitcoin client discovery resembled earlier peer-to-peer protocols like BitTorrent, but the similarities ended there. Bitcoin does not need much bandwidth; while the client accepts connections on port 8333, it can participate in a limited fashion even without inbound connectivity. A one-time initial download of the blockchain transfers several gigabytes of data, but subsequent activity consists of small flows of a few hundred kilobytes.

 

Transactions are crowd-processed on the Internet, where individuals can opt into processing blockchain transactions in exchange for bitcoin compensation. Those transaction processors contribute their computers’ CPU power, bandwidth, and electricity, and when they deliver an answer, they earn bitcoins of their own. We can call that the processing fee. The “work” they are providing is validating transactions, ultimately creating security by solving computationally hard math problems. And what is the reward? In 2010, the value of the coins was negligible; Laszlo Hanyecz famously exchanged 10,000 of them for two Papa John’s pizzas, pegging their value at roughly a quarter-cent each. I am sure he wishes he had kept those.

 

The Value of Bitcoin

In 2010, a blockchain processor, often called a cryptominer, could earn 50 BTC for solving the complex problem. That reward was worth about 12 cents. Today, one Bitcoin is worth thousands of US dollars, and the solution rewards 12.5 BTC, or $87,500 at a price of $7,000 per coin. You can always look up today’s bitcoin value. But how can a virtual currency gain value? It gains value because of trust, and that trust comes from the transparency and security of blockchain.

 

Compute Power Required

It is a race to the prize. Finding the solution that wins the block prize of 12.5 BTC is approximately 3.5 trillion times harder than it was in 2010. Bitcoin, like all cryptocurrencies built on proof of work, has a built-in mechanism that makes mining harder as more resources are thrown at it. This keeps the rate at which new coins are issued roughly constant, while incentivizing further investment in coin mining hardware. Difficulty will continue to increase as long as miners can sell a coin for more than the cost of finding one.
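For the curious, the retargeting rule itself is simple. Here is a simplified Python sketch of Bitcoin-style difficulty adjustment: every 2016 blocks the difficulty is rescaled so blocks keep arriving roughly every ten minutes, with any single adjustment limited to a factor of four. The constants match Bitcoin’s published rules, but this is an illustration, not consensus code:

    RETARGET_INTERVAL = 2016          # blocks between difficulty adjustments
    TARGET_BLOCK_TIME = 10 * 60       # seconds per block the network aims for

    def retarget_difficulty(old_difficulty, actual_timespan_seconds):
        """Raise difficulty if the last 2016 blocks came too fast, lower it if too slow."""
        expected = RETARGET_INTERVAL * TARGET_BLOCK_TIME
        # A single retarget cannot move more than 4x in either direction.
        clamped = min(max(actual_timespan_seconds, expected / 4), expected * 4)
        return old_difficulty * expected / clamped

    # Example: the last 2016 blocks took one week instead of two, so difficulty doubles.
    print(retarget_difficulty(1.0, 7 * 24 * 3600))   # 2.0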

 

Bitcoin’s proof-of-work is based on the SHA256 algorithm, a hashing function commonly used in security protocols. SHA256 can be massively optimized, and this has been exploited in the Bitcoin community. Miners turned first to GPUs, then FPGAs (field programmable gate arrays), and finally ASICs (application specific integrated circuits), seeking ever-higher performance.
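The “complex math problem” is really a brute-force search: keep hashing candidate headers until one hashes below a target value. A toy Python sketch (not Bitcoin’s exact 80-byte header format, but the same double-SHA256 search) looks like this:

    import hashlib

    def mine(header: bytes, difficulty_bits: int) -> int:
        """Find a nonce whose double SHA256 hash falls below the difficulty target."""
        target = 1 << (256 - difficulty_bits)      # smaller target = harder search
        nonce = 0
        while True:
            payload = header + nonce.to_bytes(8, "little")
            digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # A very low difficulty so this finishes in well under a second; real Bitcoin
    # difficulty makes the equivalent search astronomically harder.
    print(mine(b"example block header", difficulty_bits=16))

This is also why the hardware arms race happened: the search is embarrassingly parallel, so whoever can compute SHA256 fastest, per watt and per dollar, wins more often.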

 

Today, an ASIC miner can deliver 140,000 times the performance of a desktop PC (see the table below on relative Bitcoin mining performance). Even pooling thousands of PCs proved inefficient compared with a single ASIC miner, so nearly all Bitcoin mining is done with ASIC miners today. However, other currencies are growing in popularity, and some are highly suited to distributed mining efforts.

 

Processor                  Bitcoin SHA256 hashing performance    Cost
Intel i7 CPU system        100 MH/sec                            $300
Nvidia GTX 1080Ti GPU      1,000 MH/sec                          $400
Antminer S9 ASIC miner     14,000,000 MH/sec                     $2,320
(MH/sec = millions of hashes per second)

 

Distributed Mining

One of the biggest changes in the blockchain world is the pooling of resources. As the work function for processing a blockchain transaction grows more difficult, it takes longer for any single machine to find the next solution. To increase the odds of finding it quickly, creative miners have turned to load balancing. Cryptocurrency work functions can be easily distributed across large numbers of clients, a process called “pooling.”

 

Pool operators incentivize individuals to join a pool in exchange for a share of the profits. As mentioned earlier, pooling workstations is ineffective for Bitcoin due to the sheer performance advantage of ASIC miners. However, other blockchain applications use different proof-of-work algorithms, and some of these can provide very competitive returns when mined in pools.
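The economics of joining a pool reduce to a simple expected-value calculation: the pool wins blocks in proportion to its share of total network hash power, and pays each member in proportion to the work they contribute, minus the operator’s fee. A rough Python sketch, with every number below purely illustrative:

    def expected_daily_payout(my_hashrate, pool_hashrate, network_hashrate,
                              block_reward=12.5, blocks_per_day=144, pool_fee=0.01):
        """Expected coins per day for one member of a mining pool (simplified model)."""
        pool_blocks = blocks_per_day * (pool_hashrate / network_hashrate)
        my_share = my_hashrate / pool_hashrate
        return pool_blocks * block_reward * my_share * (1 - pool_fee)

    # Example: contributing 0.1% of a pool that controls 10% of the network.
    print(expected_daily_payout(my_hashrate=1, pool_hashrate=1_000, network_hashrate=10_000))

Swap in a different block reward and block rate and the same arithmetic applies to the alternative currencies mentioned above.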

 

Of course, whenever there are transactions being conducted in what are considered public forums, such as the internet, there will always be those looking to exploit system vulnerabilities for personal profit. Because of this, continuous active monitoring of your network is vital.

 

The Simple Equation

As of June, 2018, 0.32% of the world’s electricity is used to process bitcoin transactions at a cost of nearly $3B a year. In April, that number was 0.27%. That’s a big jump. This marks a shift in how transaction processing is being done. It also marks a shift in how distributed processing is changing the landscape of enterprises as they adopt blockchain for everything from B2B purchase orders to international corporate funds transfers to supply chain management. We can learn a lot from cryptocurrencies as they are innovating blockchain processing in new and exciting ways.

 

In the next chapter, we will look more closely at how these currencies work and why new currencies rise and become viable. This is important when considering how to craft corporate policies, not just for cryptocurrency, but for blockchain operations in your business and on your network. We may also explore the threat vectors bad actors use to compromise systems, and how a network visibility architecture can ensure you have only legitimate transactions within your business network.

When Thomas J. Watson, Jr., the former president of IBM, said in a 1973 lecture that “good design is good business,” he was acknowledging the impact design had made on IBM’s fortunes during the 1950s and 60s. In those decades, Watson Jr. had overseen the company’s transition from making punch-card systems to new electronic computers. It was a big shift on multiple levels. He also put in place an overarching ‘design thinking’ program which spanned everything from IBM’s products to its buildings, marketing, and logos.

 

Those changes formed the foundation of IBM’s future in the computer market, eventually leading to the popular phrase customers would utter: "You’ll never get fired for buying IBM." Becoming a trusted brand has always required building products with thoughtful, integrated features and high reliability. Rapid hardware development, spiral design cycles, and the introduction of continuous software services forever extend the relationship between product developer and product user. End-of-life electronic components, processor and memory upgrades, new interfaces and features, and bug fixes continue to be part of the development process long after the design is complete, the prototype has been validated, and the device has been produced.

 

Trusted brands deliver products with the performance, reliability, and safety that users demand. Getting there requires in-depth testing and benchmarking in the widest possible range of environments and use cases. But another key element has emerged in the product development lifecycle – the need to integrate services as part of the product offering. The more features and uses your product has, the more customers benefit from deeper interactions beyond the sale. Learning, exploring, and using are all more ingrained with integrated offerings.

 

Knowing when a microsecond matters

We are fortunate to support a lot of very important and exciting customer product developments. One area, automotive, is currently undergoing a renaissance: integrating more eMobility, more autonomy, more connectedness, and of course more electronics. There is also a lot at stake, so developers need to make sure measurements are very precise.
Cars are integrating autonomous driving algorithms that rely on inputs from sensors: optical imagers, LIDAR (light detection and ranging), and radar systems, to name a few. For new automotive millimeter wave (mmWave) radars operating at 77 GHz, testing requires a precise set-up, so we had to develop a complete service offering – measurement equipment, chamber guidance, special data capture software, and high-precision calibration. You will see why when you look at the basic test steps:

 

  1. Start with an automotive radar target simulator consisting of an anechoic chamber, signal analyzer, power meter, and target object
  2. Create a Simulink model of an automotive radar simulator
  3. Run uncertainty analysis on all sub-elements to generate test procedures and limits
  4. Load procedures into the test system
  5. Run diagnostic measurements to verify system operation
  6. Perform calibration on the test equipment
  7. Run the test

 

Throughout this process, precision counts, so the equipment alone was not enough. It needed precise tuning. We are looking for the signal, but we are also looking at the noise, interference, temperature variation, and environmental variation – they all make a difference in the result. And why? Because seconds can count, milliseconds can count, and in this case, microseconds can count a lot.

 

Putting it in perspective: What’s an order of magnitude between friends?

Let’s take our setup above and pretend that we are simulating the position, or range, of a car driving in front of us. Uncertainty in time delay caused by skipping the tuning steps, for instance, can bias the recorded results. The time delay measurement is very sensitive. Radars send out pulses, and when they encounter something solid, some of the signal bounces back. The more solid the object, the larger the reflected signal. Range is determined by measuring the pulse delay.
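For reference, under a simple time-of-flight model, range follows directly from the measured round-trip delay: range = (c × delay) / 2, and therefore range error = (c × delay error) / 2, where the factor of two accounts for the out-and-back path. With c at roughly 3 × 10^8 m/s, every microsecond of delay error corresponds to about 150 meters of range.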

 

If, in our example, the car in front of us is 100 meters away, even 1 microsecond of uncertainty in the setup translates into a potential range error of roughly 150 meters. Not good.

 

Build a business that solves problems and helps your customers win

Whether your innovation is in automotive technology, 5G, IoT, cloud, aerospace and defense, or any other modern market, the move to products integrated with services is part of your world. It is necessary for a better customer experience. Developers expect, and need, complete solutions that solve a problem. The trusted brands of tomorrow will be the ones who successfully help their customers win in the market. We intend to be at the head of that line so that customers may someday utter: “You’ll never get fired for buying Keysight. Those guys solve problems.”

 

Find out more about how we’ve helped enterprises solve their design problems here.

Jeff Harris

Data Centers on Wheels

Posted by Jeff Harris, Mar 28, 2018

As vehicles pack in more and more technology, such as remote diagnostics, on-board GPS, collision avoidance systems, Wi-Fi hotspots for connectivity, cameras and yes, autonomous driving systems, it’s no surprise that they are increasingly being described as ‘data centers on wheels.’

 

Interconnecting all these complex systems involves wiring harnesses, connections, and, of course, data exchange processing. Sensors need to provide feedback to mechanical systems, and as more autonomy is built into vehicles, decision algorithms require central processing in between, often integrating inputs from external data sources. Some estimates state that the use of current, proprietary wiring harnesses in a typical car consumes up to 50% of the labor cost that goes into building it. This conventional wiring can also weigh upwards of 200 pounds (90kg), and each connection requires functional testing.

 

To solve these problems, auto manufacturers are turning to data center technology with the use of automotive Ethernet. This enables a single standard for wiring and cabling across all manufacturers, and can realize significant savings in production-line time and weight by simplifying connectivity.

 

Replacing complex and heavy proprietary wiring in current vehicles with simpler connections based around fast Ethernet also delivers further benefits. Today’s CAN (Controller Area Network) based wiring operates at just one megabit per second. But there are many functions in a vehicle competing for bandwidth — including Bluetooth, Wi-Fi, infotainment systems, reversing cameras, automatic braking systems, hybrid powertrain systems, and autonomous driving systems interconnecting dozens of sensors, processors, and electromechanical system controls. Automotive Ethernet, offering 100 Mbps or even gigabit speeds, provides manufacturers with a proven method of interconnecting all of these subsystems, while ensuring that everything gets the bandwidth and ultra-low latency it demands.

 

Then comes the transition, as manufacturers migrate their vehicles from existing, proprietary CAN technology to automotive Ethernet. This requires a host of new testing approaches.

 

Putting automotive Ethernet to the test

We helped one of the largest automobile manufacturers in China make this transition. The company planned to upgrade to automotive Ethernet in its vehicles to reduce manufacturing costs and improve owners’ experience by enabling advanced data communications, remote diagnostics, and software upgrade capabilities. They had been using an older CAN system for their vehicle system bus, built on older electronic control units (ECUs) that several external vendors had identified as end-of-life. The manufacturer had to upgrade, and we helped them move to Ethernet to connect the cars’ digital instruments, driving data recorder, ECUs, and infotainment switches. It turned out that the testing was not only more modern and comprehensive, but also faster.

 

The manufacturer just needed to be sure the upgrade would work seamlessly before overhauling their manufacturing processes. Defining the new automotive Ethernet network required validation of the new physical layer, protocol conformance, and automotive application performance. We demonstrated a series of automated test scripts that provided so much coverage, the manufacturer could see how easily it could be extended across:

 

• physical layer validation for the 100BASE-T1 Ethernet standard
• validation of multiple physical layers within an ECU
• conformance validation against the various protocols used in the ECU
• network deployment validation and debugging using automotive data loggers

 

This was a great example of the power of combining Keysight’s Layer 1 capabilities with the Layer 2-7 expertise of Ixia, a Keysight business. We demonstrated how to validate the physical-layer connection at every point, and follow that up with a full suite of Layer 2 through 7 conformance and performance testing. This was the only way to prove that each new automotive Ethernet connection worked and properly interconnected every point within the car’s network.

 

The result enabled this car manufacturer to build a single integrated testbed (a Keysight oscilloscope plus IxANVL and IxNetwork from our Ixia business) that delivered the desired test coverage and offered remote programming interfaces for automation across testing tools.

 

By replacing an older, more manual test process with a set of modern, automated tests, testing time shrank from two days to one. This is a great example of how upgrading can not only provide better performance, but also save time and money through greater efficiency. You can read the full Automotive Ethernet Case Study and find out more about Keysight’s automotive Ethernet solutions.

Having a strong perimeter for protection has been a core security strategy for centuries, and it’s still the basic foundation for network security today. Enterprises traditionally focus most of their security efforts on stopping unauthorized access and threats at the network border, to protect the applications and sensitive data within.

 

However, that network border is no longer a solid defensive barrier. It’s getting increasingly stretched and fragmented as organizations migrate their applications and infrastructure to the cloud. In its 2017 State of the Hybrid Cloud report, Microsoft found that 63% of enterprises are already using hybrid cloud environments.

 

The result is that gaps are appearing in perimeter defenses, which can be exploited by hackers or malware to steal information and IP. The data breach at credit reference agency Equifax, which exposed the records of 143 million U.S. customers in September 2017, was caused by hackers exploiting a simple vulnerability in a web application. That same month, Verizon, Time Warner Cable and Deloitte all suffered breaches from poorly configured Amazon S3 buckets.

 

These issues are forcing enterprises to rethink their approaches to data security. They’re starting to focus less on perimeter defenses, and more on identifying unusual user or network behavior which may be an early sign of a potential breach or attack. Another driver behind this rethink is the EU General Data Protection Regulation (GDPR), which comes into effect on 25th May 2018.

 

GDPR will force organizations to take greater responsibility over how they secure the personal data they hold, or risk significant penalties if they have a breach. Yet a recent analysis by Forrester found that only 25 percent of organizations are currently GDPR compliant, while just 22 percent expect to be compliant by the end of 2018.

 

Given the need to urgently address these challenges, what actions should enterprises take? Security Week recently published our article describing how organizations can rethink their security strategies to resolve these issues before they get out of control. Here’s a recap of the five key steps it describes:

 

1. Assign roles specific to new threats

Data security is a priority, so don’t spread responsibility for it across your IT department or add it to an existing manager’s workload. Putting a single person or team in charge ensures that it will get the attention it needs.

 

2. Audit data and infrastructure immediately

Enterprises need to know exactly what data they are dealing with, what policies need to be attached to each type of data, who has access to that data, and where the workloads accessing critical data are running. This requires in-depth visibility across the entire enterprise network environment. It is also important to document data capture methods for compliance. An initial audit and ongoing asset discovery are essential to identify what is vulnerable and where, so action can be taken to close the gaps.
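As one small illustration of the infrastructure side of an audit, the Python sketch below (standard library only, with a made-up address range) sweeps a subnet for hosts answering on a few well-known service ports so the results can be compared against what you think you are running. Real audits use purpose-built discovery and data-classification tooling; this only shows the idea of building an inventory you can keep current:

    import socket

    SUBNET = "10.0.0."                 # hypothetical management subnet
    PORTS = [22, 80, 443, 445, 3389]   # SSH, HTTP, HTTPS, SMB, RDP

    def open_ports(host, ports, timeout=0.3):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.append(port)
        return found

    inventory = {}
    for last_octet in range(1, 255):
        host = f"{SUBNET}{last_octet}"
        ports = open_ports(host, PORTS)
        if ports:
            inventory[host] = ports

    print(inventory)   # feeds the audit record and, later, the baselines in step 3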

 

3. Create baselines
Once the enterprise understands its data profiles, and who should have access to which type of data, this understanding can be turned into baselines of expected, normal behavior.

 

4. Monitor for abnormalities
Enterprises then need to monitor user and network behavior against these baselines to identify anomalies which could signal a potential breach. Examples include a user downloading terabytes of data, or an employee with marketing credentials accessing server logs.
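A baseline does not have to be sophisticated to be useful. The Python sketch below (with invented users and numbers) models each user’s normal daily data transfer from history and flags days that sit far outside it; production systems track many more signals, but the principle is the same:

    from statistics import mean, stdev

    history_gb = {                      # past daily transfer volumes per user, in GB
        "marketing_user": [0.4, 0.6, 0.5, 0.7, 0.5, 0.6],
        "db_admin":       [20, 25, 22, 24, 21, 23],
    }

    def is_anomalous(user, todays_gb, threshold=4.0):
        """Flag a transfer more than `threshold` standard deviations above the user's baseline."""
        baseline = history_gb[user]
        mu, sigma = mean(baseline), stdev(baseline)
        return todays_gb > mu + threshold * max(sigma, 0.1)   # floor sigma so a flat baseline still needs a real jump

    print(is_anomalous("marketing_user", 500))   # True  - hundreds of GB from a marketing login
    print(is_anomalous("db_admin", 24))          # False - within this user's normal range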

 

5. Ensure security data is also secured
Enterprise security teams also need to secure their own processes. Personally identifiable information (PII), included in everything from logs to personnel data, needs to be secured through data masking to ensure security itself is not the weak link.
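Masking can start as simply as scrubbing obvious identifiers from log lines before they land in a shared security data store, so analysts can still correlate events without handling raw PII. A minimal Python sketch, with patterns that are illustrative rather than exhaustive:

    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    SSN   = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def mask(line: str) -> str:
        """Replace e-mail addresses and US social security numbers with placeholders."""
        return SSN.sub("<ssn-masked>", EMAIL.sub("<email-masked>", line))

    print(mask("login failure for jane.doe@example.com, ssn on file 123-45-6789"))
    # -> login failure for <email-masked>, ssn on file <ssn-masked>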

 

In conclusion, security strategies focused solely on perimeter defenses are no longer capable of protecting sensitive data against theft and inadvertent leakage in today’s complex IT environments. Organizations need to be able to quickly identify threats and vulnerabilities inside their networks, to keep PII safe. After all, if they can’t see what’s happening, they have no control over it.