RoadTest: Pico Technology PicoScope 2205A

It’s been a little while since my last post – partly because I was feeling somewhat unwell, but also because I’ve been putting a mammoth effort into reviewing the Pico Technology PicoScope 2205A – a budget 25 MHz, 200 MS/s, 8-bit dual-channel USB oscilloscope with a built-in arbitrary waveform generator.


This lovely compact unit was quite versatile and impressive – thanks to element14 and Pico Technology for providing it.


Read the Full Review here!


8 Responses to RoadTest: Pico Technology PicoScope 2205A

  1. drew wollin says:

    Hi. I have been through a similar learning experience with digital oscilloscopes: http://vk4zxi.blogspot.com.au/2014/03/red-pitaya-arrived-and-working.html http://blog.redpitaya.com/?p=58 http://blog.redpitaya.com/?p=58#comments. As a radio amateur, I am interested in high frequencies, VHF and UHF actually. To get a correct waveform at, say, 146 MHz, the oscilloscope needs to be sampling at 10 or more times that rate to get enough data points to plot a representative image. Thus, my 150 MHz scope has a sample rate of 1000 Msps. The Nyquist criterion, twice the signal rate, just tells you that a signal is there, assuming a sine wave.

    To compound the problem, few analogue-to-digital converters can sample above 250 Msps; see the Analog Devices catalogue. To get around the problem, they use time-interleaved ADCs: a number of ADCs sampling with small time offsets, with their outputs processed together to get 1000 or 2000 Msps. Hence there is good reason high-frequency oscilloscopes are expensive. Similarly, sample rates cannot simply be doubled by firmware hacking, as has been popular on the net. There is surprisingly little written about how digital oscilloscopes work, and it is very hard to get a schematic to see a real hardware design.

    Regards Drew VK4ZXI
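
    To put a rough number on that, here is a little Python/numpy sketch (a synthetic 146 MHz sine, with the sample rates standing in for the scopes above): at exactly twice the signal frequency the apparent amplitude depends entirely on where the samples happen to land, whereas 1000 Msps recovers the peak faithfully.

```python
import numpy as np

# A 146 MHz sine captured for 1 microsecond, sampled at exactly the
# Nyquist rate (2x) versus a 1000 MS/s scope.  At 2x the apparent
# amplitude depends entirely on where the samples land in the cycle.
f_sig = 146e6
for phase_deg in (90, 10):              # where the samples land in the cycle
    phase = np.deg2rad(phase_deg)
    for fs in (2 * f_sig, 1000e6):
        t = np.arange(int(fs * 1e-6)) / fs
        x = np.sin(2 * np.pi * f_sig * t + phase)
        print(f"phase {phase_deg:2d} deg, fs {fs/1e6:6.0f} MS/s: "
              f"apparent peak {np.abs(x).max():.2f} (true peak 1.00)")
```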

    • lui_gough says:

      Nice to hear from you, Drew!

      I am aware of interleaved ADCs, but that in itself causes problems too, as the phase offsets of the different ADCs need to be precisely controlled. If they aren’t spaced evenly and their characteristics well matched, they will cause more signal distortion in the digitized output, and might make things worse.
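
      As a rough illustration of the problem, here is a toy two-way interleave in numpy (the 20 ps timing skew is an arbitrary figure chosen purely for the example): even a tiny mismatch puts an image spur into the spectrum at fs/2 - f.

```python
import numpy as np

# Two ADCs interleaved to 1000 MS/s, with the second ADC sampling
# slightly late.  The 20 ps skew is made up purely for illustration.
fs, f_in, n = 1000e6, 146e6, 4096
skew = 20e-12

t = np.arange(n) / fs
t_skew = t.copy()
t_skew[1::2] += skew                               # every second sample is from ADC B
image_bin = int(round((fs / 2 - f_in) * n / fs))   # mismatch spur lands at fs/2 - f_in

for label, times in (("matched", t), ("skewed ", t_skew)):
    x = np.sin(2 * np.pi * f_in * times)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    spec /= spec.max()
    spur = spec[image_bin - 2:image_bin + 3].max()
    print(f"{label}: image spur at {(fs/2 - f_in)/1e6:.0f} MHz = "
          f"{20 * np.log10(spur + 1e-15):6.1f} dBc")
```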

      Depending on the sort of waveform, at those RF frequencies it’s pretty much SDR territory. I suppose one could always go back to a mixer-based front-end to downconvert the signal of interest to within the range of the equipment; otherwise you’d have to spend upwards of $10k on a very high-spec spectrum analyzer!

      – Gough

      • drew wollin says:

        Hi Gough. I do read your columns and find that the seemingly simple can turn out to be complex; that is the theme of my other (neglected) blogs, accessible through my profile.

        Oscilloscopes do use special SoCs with multiple ADCs plus the interleaving control and processing but, as you point out, that is not without problems. These systems-on-a-chip have allowed scopes that go to 2000 Msps to be reasonably priced, under $1000. The GHz scopes are still very expensive.

        I was thinking of mentioning spectrum analysers in my first post, but it was already a bit long. There is a point where it is cheaper to go to a spectrum analyser – a move from the time domain of an oscilloscope to the frequency domain of a spectrum analyser. Usually spectrum analysers are just fancy super-het receivers that give a calibrated visual display, as Software Defined Receivers (SDRs) now do. There is a fine line between SDRs and spectrum analysers now. I use a BladeRF SDR (US$420) with a bandwidth of 20 MHz to view my amateur TV, a 7 MHz wide DVB-T signal, at about 466 MHz. http://vk4zxi.blogspot.com.au/2013/11/it-lives-bladerf-sdr-on-windows-using.html

        The BladeRF SDR uses a direct conversion receiver on a single chip to cover 300 MHz to 3800 MHz. The much promised, but yet to arrive, 0 to 300 MHz transverter uses the BladeRF as an IF (intermediate frequency), effectively a super-het receiver, plus direct sampling below 30 MHz. Thus it uses all three common receiver configurations to cover 0 to 3800 MHz. The cost of a spectrum analyser to do that is mind-boggling!

        Presumably it should then be possible to do the reverse for a small part of the spectrum: go from the frequency domain back to the time domain using software digital signal processing, effectively becoming an oscilloscope again! Maybe that is how the GHz ones do it? It shouldn’t be that hard to do in software, especially using something like GNU Radio – but I can’t say I have seen it anywhere!
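
        Something like this numpy sketch is what I have in mind (a synthetic two-tone I/Q block standing in for a real capture, rather than anything done in GNU Radio):

```python
import numpy as np

# Pick a small slice of the spectrum and turn it back into a
# time-domain trace with an inverse FFT.  The I/Q block is synthetic:
# tones at +100 kHz and +300 kHz from the tuned frequency.
fs, n = 2e6, 8000
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 100e3 * t) + 0.5 * np.exp(2j * np.pi * 300e3 * t)

spectrum = np.fft.fft(iq)
freqs = np.fft.fftfreq(n, 1 / fs)

# keep only the 50-150 kHz slice, zero the rest of the band
mask = (freqs > 50e3) & (freqs < 150e3)
trace = np.fft.ifft(np.where(mask, spectrum, 0))

# trace.real is now an oscilloscope-style view of just that slice;
# only the 100 kHz tone should remain in it
peak = np.abs(spectrum * mask).argmax()
print(f"dominant component in the slice: {freqs[peak]/1e3:.1f} kHz, "
      f"period {1e6/freqs[peak]:.1f} us")
```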

        Regards Drew VK4ZXI

        PS: Red Pitaya is an example of software-defined instrumentation vs the firmware-defined instrumentation of the Pico. The line between firmware-defined and software-defined is getting very blurred.

  2. drew wollin says:

    Hi Gough, again

    I was thinking a bit more about oscilloscopes, time vs frequency domains, and SDRs. SDRs explicitly output the signal in the time domain, but as audio, not visually like an oscilloscope.

    It should be possible to divert the sound output of the SDR to a software oscilloscope and get a visual time-domain representation of a demodulated signal.

    Without demodulation, the displayed signal would be the actual time-domain signal that the SDR is tuned to. The simplest way to set an SDR to no demodulation is to put it in Morse code (CW) mode.

    It gets more bizarre (my thinking). The “waterfall” of an SDR is the time-domain visualisation of the whole band.

    An SDR is already both a spectrum analyser and an oscilloscope, in that it displays the frequency and time domains, respectively! Neat.

    Maybe it is just me not seeing the obvious? The time-domain trace is vertical in an SDR, not horizontal like an oscilloscope.

    As such, it should be possible to use some SDR hardware to create software/firmware-defined instrumentation, with the normal controls expected on a spectrum analyser and an oscilloscope.

    It all gets very blurred when you can use an Android phone as a computer, oscilloscope, spectrum analyser and SDR, as well as a heap of other things!

    Regards Drew VK4ZXI

    • lui_gough says:

      Dear Drew,

      Thanks for the replies, and yes, I agree that it’s extremely blurred when one considers just how much “software defined” hardware there is under development. You’re right though – the data that streams out of any ADC is a time-domain representation of the voltage going into the ADC, so an SDR has the time representation and the software is doing the FFT magic to turn it into a frequency-domain representation. That’s the whole idea behind SDR. As a result, if you record the “IF” of the SDR as an I/Q signal (most SDRs do some digital or analog downconversion of the input first), then you’re actually getting the time-domain data!

      But unfortunately, as most SDRs have front-end selectors and downconverters attached, the time-domain representation is often frequency shifted (e.g. the DC component corresponds to a signal at x MHz), which means it might not be all that handy (think of an oscilloscope that isn’t sensitive from DC up to x MHz). The other thing is that SDRs aren’t really designed for a (necessarily) linear front-end response when it comes to signal intensity – moderate to strong amplitude distortion often has little real effect on the modulated signals (e.g. FM) that SDRs are going to be used with, although the better SDRs will always have better dynamic range and linearity figures.
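
      A quick numpy sketch of that frequency shift, with made-up figures (the downconversion is simulated directly rather than captured from real hardware): a carrier 300 kHz above the tuned frequency turns up in the I/Q stream as a 300 kHz rotation, while anything at the tuned frequency itself sits at DC.

```python
import numpy as np

# Simulate an SDR tuned to 100.0 MHz looking at a carrier at 100.3 MHz.
# Sampling at an RF rate of 1 GS/s is purely a simulation convenience.
fs_rf, f_c, f_sig = 1e9, 100.0e6, 100.3e6
t = np.arange(200000) / fs_rf
rf = np.cos(2 * np.pi * f_sig * t)

# complex downconversion: mix with a local oscillator at the tuned frequency
iq = rf * np.exp(-2j * np.pi * f_c * t)
# crude low-pass and decimate: average blocks of 500 samples (down to 2 MS/s)
iq = iq.reshape(-1, 500).mean(axis=1)
fs_bb = fs_rf / 500

# the baseband I/Q rotates at f_sig - f_c, i.e. the tuned frequency is at DC
phase = np.unwrap(np.angle(iq))
f_est = (phase[-1] - phase[0]) * fs_bb / (2 * np.pi * (len(iq) - 1))
print(f"apparent frequency in the I/Q stream: {f_est/1e3:.1f} kHz (expected 300.0)")
```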

      It all comes down to purpose – a device designed for signal analysis will do almost anything to ensure the accuracy of the signal representation – and isn’t likely to accept nasty compromises to chase sensitivity, etc.

      A waterfall display is a frequency-domain plot, with respect to time … at least, that’s how I think of it. It can get really confusing!!! The BladeRF SDR does sound interesting – after all, high sample-rate ADCs are very expensive to buy and “feed” properly, so I can accept that it’s often going to be a case of heterodyning your signals appropriately – I’m sure a USRP with the right modules will do very similar things.
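
      In numpy terms, I picture the waterfall as nothing more than successive FFT blocks stacked up in time – something like this sketch with synthetic I/Q (a tone that hops frequency halfway through the capture):

```python
import numpy as np

# Each row of a waterfall is the FFT of the next block of samples, so
# the vertical axis is time and the horizontal axis is frequency.
# Synthetic I/Q: a tone at -200 kHz that hops to +300 kHz halfway through.
fs, block = 1.024e6, 1024
t = np.arange(64 * block) / fs
tone = np.where(t < t[len(t) // 2], -200e3, 300e3)
iq = np.exp(2j * np.pi * np.cumsum(tone) / fs)

rows = iq.reshape(-1, block)                 # one FFT block per waterfall row
spec = np.fft.fft(rows * np.hanning(block), axis=1)
waterfall = np.abs(np.fft.fftshift(spec, axes=1))
freqs = np.fft.fftshift(np.fft.fftfreq(block, 1 / fs))

for i in (0, len(rows) - 1):                 # first and last rows of the display
    print(f"row {i:2d}: strongest signal at {freqs[waterfall[i].argmax()]/1e3:+.0f} kHz")
```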

      – Gough

    • lui_gough says:

      Dear Drew (again),

      Just had a thought about your comment about using CW mode – in fact, CW mode is basically a “very narrowly filtered” SSB demodulation mode. SSB can be demodulated by “frequency shifting” it in the frequency domain so that the carrier frequency is at zero hertz. The audio that you get out is the time-domain representation of the signal content from the carrier frequency (at DC) to the end of the passband (e.g. carrier + 3 kHz ends up at 3 kHz in the output file).
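
      A little numpy sketch of exactly that, with made-up figures (a single-tone upper-sideband signal whose suppressed carrier sits at 10 kHz within the capture):

```python
import numpy as np

# Toy upper-sideband signal: the (suppressed) carrier sits at 10 kHz in
# this capture and carries a single 1 kHz audio tone, so the only RF
# energy is at 11 kHz.
fs = 48e3
t = np.arange(48000) / fs                    # one second of samples
f_carrier, f_audio = 10e3, 1e3
rf = np.cos(2 * np.pi * (f_carrier + f_audio) * t)

# shift the spectrum so the carrier frequency lands at 0 Hz...
shifted = rf * np.exp(-2j * np.pi * f_carrier * t)

# ...low-pass to a roughly 3 kHz passband (windowed-sinc FIR)...
taps = 129
k = np.arange(taps) - taps // 2
h = 2 * (3e3 / fs) * np.sinc(2 * (3e3 / fs) * k) * np.hamming(taps)
baseband = np.convolve(shifted, h, mode="same")

# ...and the result is the audio: the 11 kHz RF component now sits at 1 kHz
spec = np.abs(np.fft.fft(baseband))
freqs = np.fft.fftfreq(len(baseband), 1 / fs)
print(f"demodulated audio tone at {abs(freqs[spec.argmax()]):.0f} Hz (expected 1000)")
```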

      What you would get from CW mode is a sort of demodulation, in the sense that it’s a frequency-shifted and filtered chunk of the RF, if you get what I’m saying (after all, SSB is a sort of AM – the amplitudes of the frequency components convey the information). It would be the same as tuning an SDR to the carrier frequency (as that will perform digital or analog downconversion to shift the carrier frequency to DC) with an ADC bandwidth equal to the passband (so that the output doesn’t include higher-frequency components from transmissions outside the bandwidth, say adjacent SSB channels) and examining the I/Q samples directly.

      Sorry if I’ve served to confuse you further – I can’t claim to be an RF or SDR expert, but this is my understanding.

      – Gough

  3. drew wollin says:

    Hi Gough

    CW is just the TX carrier turned on and off with a key. In a non-DSP receiver it is demodulated by using a beat frequency oscillator (BFO) and mixing the two together to get a tone. The pitch of the tone is changed by tuning away from the TX frequency (SSB is done the same way, with the BFO replacing the carrier). I am not sure how CW is done with DSP, but it uses I and Q, with a 90-degree phase shift between the two. The tone is still determined by tuning away from the TX signal.
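
    A rough numpy sketch of the BFO idea, with made-up figures (a keyed carrier mixed with an oscillator offset by 700 Hz, then low-pass filtered, gives an audio tone that follows the keying):

```python
import numpy as np

# A keyed carrier mixed with a BFO 700 Hz away gives a 700 Hz audio
# tone that switches on and off with the key.
fs = 48e3
t = np.arange(48000) / fs                       # one second
f_carrier, f_offset = 12e3, 700.0               # carrier within this capture; BFO offset
keying = (t % 0.2) < 0.1                        # crude dits: 100 ms on, 100 ms off
rf = keying * np.cos(2 * np.pi * f_carrier * t)

bfo = np.cos(2 * np.pi * (f_carrier - f_offset) * t)
mixed = rf * bfo                                # products at 700 Hz and ~23.3 kHz

# low-pass so only the audible beat product remains (windowed-sinc FIR)
taps = 129
k = np.arange(taps) - taps // 2
h = 2 * (3e3 / fs) * np.sinc(2 * (3e3 / fs) * k) * np.hamming(taps)
audio = np.convolve(mixed, h, mode="same")

key_down = audio[(t > 0.02) & (t < 0.08)]       # within a key-down period
key_up = audio[(t > 0.12) & (t < 0.18)]         # within a key-up period
print(f"tone level key-down {np.sqrt(np.mean(key_down**2)):.2f}, "
      f"key-up {np.sqrt(np.mean(key_up**2)):.2f}")
```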

    CW doesn’t have to have a narrow filter – there can be no filter at all. Filters are only used when there are nearby signals that are interfering.

    To a point, I was side-tracked by demodulation when thinking of a visual time-domain display in an SDR. It was more a line of thought than an argument.

    The waterfall is a mix of time (the rate the waterfall moves), amplitude (as colour) and frequency (the position in the spectrum). It is a mix of the time and frequency domains.

    What I was wondering is how to get a time-domain visualization from an SDR, for it to be an oscilloscope for part of the spectrum; it is all a bit confusing. An oscilloscope is “tuned” by altering the time-base, so waveforms at different frequencies can be observed.

    I am reading a few books on digital signal processing to try and get a bit more understanding.

    Regards Drew VK4ZXI

  4. Pingback: Reverse Engineering: The USB Charger Doctor | Gough's Tech Zone
