Here is a simple sampling of data. Think carefully before providing an answer or a diagram.

I cannot yet tell you who showed me this, because that might give away the answer. Next week I will reveal the correct way to model these points, along with the name of the person I met recently who discussed this and many other incorrectly modeled techniques we may have been using for years.
Here is the proper solution from Dr. Colin McAndrew:


This figure could represent a nonlinear equation, y = f(x). The components of f(x) could be obtained using polynomial interpolation. Then, using a simulation software tool such as MATLAB with some trial and error, the equation could be estimated. More sampling data yields a more accurate equation.
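The trial-and-error polynomial fit described above can be sketched in Python with NumPy standing in for MATLAB. A minimal sketch; the sample points below are made up for illustration, since the actual data set in the plot is not reproduced here:

```python
import numpy as np

# Hypothetical sample points (NOT the data from the article's plot).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 4.2, 8.8, 16.1])

# Try polynomials of increasing degree and compare worst-case residuals.
# Degree len(x)-1 passes through every sample point exactly.
for deg in (1, 2, len(x) - 1):
    coeffs = np.polyfit(x, y, deg)           # least-squares fit of given degree
    resid = y - np.polyval(coeffs, x)        # error at the sample points
    print(f"degree {deg}: max residual {np.max(np.abs(resid)):.3g}")
```

Note that a zero residual at the sample points says nothing about accuracy between them, which is exactly the caveat raised in the replies below.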
@DaeJ—Good analysis, but think about the individual sampling points.
Without some sort of information on the process being measured, I don't see that anything but a linear approximation (least-squares fit) would be appropriate. If there were a plausible model of the system, then some other function could be used, but it is entirely possible to overinterpret the data if, for example, an Nth-order polynomial were applied: yes, you could “hit” every point, but intermediate values would be pretty suspect. See “Runge's Phenomenon.”
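Runge's phenomenon is easy to demonstrate numerically. A minimal sketch using Runge's classic test function (not the mystery data): a degree-10 polynomial hits all 11 equispaced samples exactly, yet its error between samples is far worse than that of a humble least-squares line.

```python
import numpy as np

def runge(x):
    # Runge's classic example: f(x) = 1 / (1 + 25 x^2)
    return 1.0 / (1.0 + 25.0 * x**2)

# 11 equally spaced sample points on [-1, 1]
xs = np.linspace(-1, 1, 11)
ys = runge(xs)

# Degree-10 polynomial interpolates every sample exactly...
p_high = np.polyfit(xs, ys, 10)

# ...but oscillates wildly between samples near the interval edges.
xf = np.linspace(-1, 1, 1001)
err_high = np.max(np.abs(np.polyval(p_high, xf) - runge(xf)))

# A simple least-squares line has a bounded (if large) error everywhere.
p_lin = np.polyfit(xs, ys, 1)
err_lin = np.max(np.abs(np.polyval(p_lin, xf) - runge(xf)))

print(f"max error, degree 10: {err_high:.2f}")  # roughly 1.9 at the edges
print(f"max error, degree 1:  {err_lin:.2f}")
```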
Looks a little bit like a power MOSFET gate charge measurement. But, if so, it's very badly measured. Hope it's something else!
Agree with GSKrasle… reminds me of this cartoon
Random Signal Processing 101: in the modeling process, there is always some error between the predicted and real values. The engineer judges how much error is allowed when designing the black-box process, depending on the application. A process can be either deterministic or nondeterministic; a deterministic process is one whose future values can be predicted from past values.
@onthejob: I think something went wrong, and that is why the measurement came out badly.
@chirshadblog – really doubt this plot is what I suggested. But this brings up a topic I have a little experience with, i.e., curve fit models for devices/circuits. While most engineers are naturally reluctant to use such models, a good curve fit model THAT INCORPORATES SOME COMPONENT UNDERSTANDING can actually be very successful – if an appropriate “basis function” is used. A prime example, the exponential model for a diode's current vs. voltage function. Simple polynomial functions are usually NOT very good because they “blow up” at large extremes and can cause simulators to fail (by the way, the exponential diode model should always include a series resistance to prevent this).
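The point about the exponential diode model and its series resistance can be illustrated with a small sketch. All parameter values below are invented for illustration, not taken from any real device; the series resistance RS bounds the current at large forward bias, which is what keeps the exponential from “blowing up” in a simulator.

```python
import math

def diode_current(v, IS=1e-12, N=1.8, VT=0.02585, RS=0.5):
    """Current through an exponential diode with series resistance RS.

    Solves i = IS * (exp((v - i*RS) / (N*VT)) - 1) by bisection.
    Parameter values are assumed for illustration only.
    """
    if v <= 0:
        # In reverse bias the IR drop across RS is negligible.
        return IS * (math.exp(v / (N * VT)) - 1.0)
    f = lambda i: IS * (math.exp((v - i * RS) / (N * VT)) - 1.0) - i
    lo, hi = 0.0, v / RS  # the solution is bracketed: 0 <= i <= v/RS
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(diode_current(0.7))  # forward current for these assumed parameters
```

Without RS, the current at v - i*RS would be unbounded as v grows; with RS, the model self-limits, which is the robustness point made above.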
@Daej, all models are imperfect abstractions of reality, and since precise input data are rarely if ever available, all output values are subject to imprecision. Our models are simplifications of the real systems we study; furthermore, we cannot predict the future with precision, so we know the model outputs are at best uncertain.
I agree; it looks like the trend of the gate charge of a power MOSFET vs. the voltage between the GATE and SOURCE terminals. The accuracy and resolution of the measurement instrument are very important in this type of measurement, and the best choice of interpolation curve depends on the accuracy of the measured data.
I should have said that my experience with gate charge measurement is old and my understanding/modeling of power MOSFETs is also very old. But probably still relevant.
My comments re curve fit models were mostly aimed at MOSFET capacitance models, and NOT so much at the curve in question (although they may apply there also). My ancient MOSFET models accurately predicted gate charge results (please see Scott, R. S.; Franz, G. A.; Johnson, J. L., “An Accurate Model for Power DMOSFET's Including Interelectrode Capacitances,” IEEE Transactions on Power Electronics, vol. 6, no. 2, April 1991).
Being older myself, your question kind of reminds me of “2001: A Space Odyssey”. No one understood that movie when it came out (or yet)! Look forward to your revelation next week. Sometimes a good mystery is more enjoyable than an obvious fact!
I am a little confused by the subject here, because I think of “modeling” as referring to a theoretical or hypothetical representation of something. But what the plot shows is apparently measured data points. So where is the model, where was the modeling? That is absent. This is a question about data analysis only, not about modeling.
Good analysis of measured data depends on having a reasonable model, understanding of the process, or expectation, and those are things we know absolutely nothing about.
Therefore, my answer to the question would be, “Bzzt. You don't interpolate that data. It's not possible.” You cannot analyze data pulled out of a hat, other than to look at it and decide whether it looks pretty. Any assumption you make could be incorrect, so a careful person would refrain from doing that.
Dr. Colin McAndrew exposed “bloopers” in engineering analyses recently at an IEEE meeting at Freescale Semiconductor in Tempe, AZ. This was one of them.
See my just-added images at the end of this article for the proper solution by Dr. McAndrew.
@Steve, in the proper solution diagram you have added margins to the data points, but I am not sure how this will help us to linearize it?
Our models are simplifications of the real systems we study; furthermore, we cannot predict the future with precision, so we know the model outputs are at best uncertain.
@Netcrawl, very true. I think we cannot rely on these models totally, because they are not perfect; they just give us insight into the system.
@Sinita T0, the margins signify the noise-level uncertainty at each data point. We average out the random noise in order to draw a linear approximation through the data points, instead of “connecting the dots” and trying to fit every point with a high-order function.
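The “average out the noise with a linear fit” approach can be sketched as follows. A minimal sketch: the underlying line and the noise level are invented for illustration, not taken from the article's plot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy measurements of an underlying line y = 2x + 1.
x = np.linspace(0, 10, 21)
y_true = 2.0 * x + 1.0
y = y_true + rng.normal(scale=0.5, size=x.size)  # additive measurement noise

# A least-squares line averages the noise across all points...
slope, intercept = np.polyfit(x, y, 1)

# ...whereas "connecting the dots" would faithfully reproduce every
# noisy wiggle, mistaking measurement noise for signal.
print(f"fitted: y = {slope:.2f} x + {intercept:.2f}")  # close to y = 2x + 1
```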
@SunitaTO, models are the primary way we have to estimate the multiple effects of alternative designs or systems. Models predict the values of various system performance indicators; their outputs depend on the model structure, on time-series inputs, and on a host of parameters whose values describe the system being simulated.
In some cases, data analysis may require modeling to predict future values in the time domain. In other cases, for example when an engineer looks for a component's saturation point over temperature, they need to perform data analysis. In my narrow view, this case might require some type of interpolation to find the equation.