Journalists love to write about trends just as they are starting, and the running joke is that when journalists count, it's "one, two, trend." They follow up with a story alerting us to the hot trend they've "discovered," supporting their contention with a few quotes, citations, and semi-arbitrary data points (if any). They also like to call out when a trend has peaked, based on sketchy information at best. Hey, it's more fun and much easier than actually digging deep and covering hard news, so why not, right?
Often, this trend-sighting is initiated by someone with a biased agenda: a product or service to sell, a joke to play on the public, or grants and funding to chase (those annual hurricane forecasts and trends come to mind). A 2014 column by the Public Editor (ombudsman) at The New York Times, "Trend-Spotting, With Wink at Mr. Peanut," explores the fallibility of these so-called trend articles, sparked by one on the supposed return of the monocle.
Engineers, of course, don't have this easy way of completing an assignment, as we deal more with data and test results, and less with anecdotes and presumptions. Still, it's a question engineers do have to think about: when does a "trend" start, based on the observed data? How far out can you credibly extrapolate reliable data? After all, a trend such as dissipation or even Moore's "law" can't go on forever, so where and when does it saturate, max out, or even begin to reverse? (Then there are the projections of the marketing folks, but that's a discussion for another time.)
A related problem is determining when a “peak” in the data has occurred. This is not a trivial question, whether you use classical analog peak detection (see related item, below), or a mostly digital approach, especially when the system (or users) demand that the peak be determined in real time.
The reason is that, by its inherent nature, a peak can only be determined after it has occurred, rather than at the instant of occurrence. The problem is defining the criterion for concluding that a peak has occurred. Is it a drop of 1% in a reading, or perhaps 2%, 5%, or even 10%?
As in so many engineering issues, the right answer is "it depends." On one side, you don't want that definition to be too tight: while it would be the quickest to indicate that a peak has occurred, it would often do so incorrectly due to noise, vibration, and normal variations. On the other side, if you make the definition window wider, you risk not realizing a peak has occurred until well after it has passed, which may cause system problems, such as with closed-loop control strategies, Figure 1.
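The trade-off can be sketched in a few lines of code. The detector below is a hypothetical, simplified illustration (the class name and the percentage criterion are illustrative choices, not from any standard): it tracks the running maximum of a sampled signal and declares a peak only once a reading drops a configurable fraction below that maximum.

```python
class PeakDetector:
    """Declare a peak once the signal drops a set fraction below its running max."""

    def __init__(self, drop_fraction=0.02):  # e.g. 0.02 = a 2% drop criterion
        self.drop_fraction = drop_fraction
        self.running_max = None
        self.peak_reported = False

    def update(self, sample):
        """Feed one sample; return the peak value when first detected, else None."""
        if self.running_max is None or sample > self.running_max:
            # Signal still rising: update the candidate peak.
            self.running_max = sample
            self.peak_reported = False
            return None
        if (not self.peak_reported
                and sample <= self.running_max * (1 - self.drop_fraction)):
            # Signal has fallen past the drop criterion: declare the peak.
            self.peak_reported = True
            return self.running_max
        return None


# A wider (5%) window rides out small dips but reports the peak later.
detector = PeakDetector(drop_fraction=0.05)
for s in [1.0, 2.0, 3.0, 2.95, 2.9, 2.8, 2.7]:
    peak = detector.update(s)
    if peak is not None:
        print(f"peak of {peak} confirmed at sample {s}")  # fires at 2.8, not 2.95
```

Note the inherent lag the column describes: with the 5% criterion, the peak of 3.0 is not confirmed until the signal has already fallen to 2.8, several samples after the actual maximum. Tighten the criterion and you report sooner, but ordinary noise will trigger false peaks.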
For example, a yield (break) test for glass may consider a 2% drop in strain as a break, while one for a very ductile metal may require 10% (since ductile metals yield more gradually before they break). For some applications, such as materials testing, the decision criteria are set by standards-setting bodies such as ASTM International, where the actual curves have some critical inflections, Figure 2. But what about peak signal strength for a wireless signal (transmit or receive)?
Have you had to establish criteria for parameters such as peaks or trends, or had difficulty agreeing on definitions due to conflicting demands and objectives?