Using Imaging Source Camera (IC4) for experiments with Python #25

Open
KalpitBakal opened this issue Nov 15, 2024 · 20 comments
Labels
python (python specific issue)

Comments

@KalpitBakal

Hello!
I have been using an Imaging Source camera with IC Capture for my microscopy experiments for a while now. However, I now want to automate a few things using the new IC4 API in Python. I will explain what I want to achieve; it would be great if you could nudge me in the right direction with examples and documentation. I have basic knowledge of Python. So here it goes:

  1. I want to use the Imaging Source camera to display live video in a window. I know I could use cv2 to show my laptop's webcam live. What function/class should I look into to display the live video from the Imaging Source camera in a window?

  2. In addition to point 1, I want to save this live video to a folder, just as I can do in the IC Capture application. I would like to set the format, FPS, etc.

  3. In addition to points 1 and 2, I would like to snap an image after a set time and save it to a particular folder. Is it possible to use time.sleep() without disrupting the live video display and saving? I know this is a problem when I use cv2 with the webcam.

If doing all 3 steps in one Python function is not possible, please let me know that too. This is a general description of my project, but if you want more details I will happily explain further.

@TIS-Tim
Contributor

TIS-Tim commented Nov 15, 2024

The qt6-first-steps example shows how to display live video: https://github.com/TheImagingSource/ic4-examples/tree/master/python/qt6/qt6-first-steps

The demoapp example expands on this to manually allow saving single images and video files: https://github.com/TheImagingSource/ic4-examples/tree/master/python/qt6/demoapp

The documentation contains a section explaining how to programmatically configure camera parameters like resolution and exposure time: https://www.theimagingsource.com/en-us/documentation/ic4python/guide-configuring-device.html (frame rate is not shown there, but setting ic4.PropId.ACQUISITION_FRAME_RATE will do the trick).
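For example, a minimal sketch (assuming grabber is an ic4.Grabber with an opened device, as in the guide; 15.0 is just an illustrative value):

grabber.device_property_map.set_value(ic4.PropId.ACQUISITION_FRAME_RATE, 15.0)  # target frame rate in frames per second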

The easiest way to build regular/timed image capture is probably to extend one of the Qt examples with a Qt timer. time.sleep will block the UI thread and make the program unresponsive.

@KalpitBakal
Author

Hello Tim,

Thank you for your quick response. I will look into these examples.

The easiest way to build regular/timed image capture is probably to extend one of the Qt examples with a Qt timer. time.sleep will block the UI thread and make the program unresponsive.

Will this stop showing the live video and stop saving the video in the folder as well?

@TIS-Tim
Contributor

TIS-Tim commented Nov 15, 2024

Yes, rendering using ic4.pyside6.DisplayWidget requires main thread interaction. If you do time.sleep on the main thread, live display will halt.

@KalpitBakal
Author

Then basically I will have to use Parallel Processing in Python and run this on a separate thread?

@TIS-Tim
Contributor

TIS-Tim commented Nov 15, 2024

No, just set up a timer: https://doc.qt.io/qtforpython-6/PySide6/QtCore/QTimer.html

In the connected callback, do what you want to do after the time has elapsed, like capture the next image. No need to create threads.
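For illustration, a rough sketch of this approach (the snap_and_save name and the 5-second interval are placeholders; it assumes a Qt application is already running and an ic4.SnapSink named sink has been set up as in the demoapp example):

from PySide6.QtCore import QTimer

def snap_and_save():
    # Placeholder callback: grab the next frame from the existing SnapSink and save it
    buffer = sink.snap_single(1000)
    buffer.save_as_bmp("snapshot.bmp")

timer = QTimer()
timer.setInterval(5000)                 # fire every 5 seconds
timer.timeout.connect(snap_and_save)
timer.start()                           # live display keeps running; nothing blocks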

@KalpitBakal
Author

No, just set up a timer: https://doc.qt.io/qtforpython-6/PySide6/QtCore/QTimer.html

In the connected callback, do what you want to do after the time has elapsed, like capture the next image. No need to create threads.

But this will still stop the live display and the saving of video to a file, right?

@TIS-Tim
Contributor

TIS-Tim commented Nov 15, 2024

No, that will just register a function to be called later. The program continues on doing whatever it does.

@KalpitBakal
Author

On a side note, if we use the Imaging Source camera as a webcam, will there be a problem like jumping frames, among many others? Or do you think it will work smoothly?

@TIS-Tim
Contributor

TIS-Tim commented Nov 15, 2024

What do you mean by "use as a webcam"?

Please direct questions unrelated to ic4 to our normal support channel: https://www.theimagingsource.com/en-de/company/contact/

@TIS-Tim added the python (python specific issue) label on Nov 15, 2024
@Skullface9512

Hi, I want to know how to set auto white balance to off in Python.

@TIS-Tim
Contributor

TIS-Tim commented Nov 22, 2024

The Documentation explains in detail how to configure device settings.

The constant for auto white balance is ic4.PropId.BALANCE_WHITE_AUTO. To disable auto white balance, set it to "Off".
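For example (assuming grabber is an ic4.Grabber with an opened device):

grabber.device_property_map.set_value(ic4.PropId.BALANCE_WHITE_AUTO, "Off")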

@Skullface9512

Recently I came across a problem: I want to take photos from two devices at the same time. The method I tried is:

grabber = ic4.Grabber()
grabber1 = ic4.Grabber()
first_device_info = ic4.DeviceEnum.devices()[0]
second_device_info = ic4.DeviceEnum.devices()[1]
grabber.device_open(first_device_info)
grabber1.device_open(second_device_info)

grabber.device_property_map.set_value(ic4.PropId.EXPOSURE_AUTO, "Off")
grabber1.device_property_map.set_value(ic4.PropId.EXPOSURE_AUTO, "Off")
grabber.device_property_map.set_value(ic4.PropId.EXPOSURE_TIME, 55555)
grabber1.device_property_map.set_value(ic4.PropId.EXPOSURE_TIME, 55555)
grabber.device_property_map.set_value(ic4.PropId.GAIN_AUTO, "Off")
grabber1.device_property_map.set_value(ic4.PropId.GAIN_AUTO, "Off")
grabber.device_property_map.set_value(ic4.PropId.GAIN, 3)
grabber1.device_property_map.set_value(ic4.PropId.GAIN, 3)
grabber.device_property_map.set_value(ic4.PropId.BALANCE_WHITE_AUTO,"Off")
grabber1.device_property_map.set_value(ic4.PropId.BALANCE_WHITE_AUTO,"Off")
grabber.device_property_map.set_value(ic4.PropId.ACQUISITION_FRAME_RATE,1.5)
grabber1.device_property_map.set_value(ic4.PropId.ACQUISITION_FRAME_RATE,1.5)
grabber.device_property_map.set_value(ic4.PropId.WIDTH, 3072)
grabber1.device_property_map.set_value(ic4.PropId.WIDTH, 3072)
grabber.device_property_map.set_value(ic4.PropId.HEIGHT, 2048)
grabber1.device_property_map.set_value(ic4.PropId.HEIGHT, 2048)

sink = ic4.SnapSink()
sink1 = ic4.SnapSink()
grabber.stream_setup(sink, setup_option=ic4.StreamSetupOption.ACQUISITION_START)
grabber1.stream_setup(sink1, setup_option=ic4.StreamSetupOption.ACQUISITION_START)

buffer = sink.snap_single(1000)
buffer.save_as_bmp("./images/L_" + str(1) + ".png")
buffer1 = sink1.snap_single(1000)
buffer1.save_as_bmp("./images/R_" + str(1) + ".png")

But it didn't save any image and showed this:

Traceback (most recent call last):
  File "E:\Project\Xeryon_motor_control\ploy_fit.py", line 63, in <module>
    buffer = sink.snap_single(1000)
  File "E:\Anaconda\envs\py38\lib\site-packages\imagingcontrol4\snapsink.py", line 164, in snap_single
    IC4Exception.raise_exception_from_last_error()
  File "E:\Anaconda\envs\py38\lib\site-packages\imagingcontrol4\ic4exception.py", line 46, in raise_exception_from_last_error
    raise IC4Exception(err, message.value.decode("utf-8"))
imagingcontrol4.ic4exception.IC4Exception: (<ErrorCode.Timeout: 27>, 'ic4_snapsink_snap_single: Timeout elapsed')

Can you give me some recommendations? Thank you.

@TIS-Tim
Contributor

TIS-Tim commented Dec 16, 2024

Are you sure that TriggerMode is Off for both devices?

@Skullface9512

Sorry, I had not set TriggerMode. I set TriggerMode like this:

grabber.device_property_map.set_value(ic4.PropId.TRIGGER_MODE,"Off")
grabber1.device_property_map.set_value(ic4.PropId.TRIGGER_MODE,"Off")

The result has not changed; is there any problem?

@TIS-Tim
Contributor

TIS-Tim commented Dec 16, 2024

Can you increase the timeout (e.g. 5000)? With a frame rate that low it might take a little longer than 1 second for the first image to arrive.
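For example:

buffer = sink.snap_single(5000)  # wait up to 5 seconds for the first frame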

@Skullface9512

It works, thank you

@TIS-Tim
Contributor

TIS-Tim commented Dec 16, 2024

If your cameras are connected to USB3, you could also increase the frame rate to make everything go faster.

Run two instances of ic4-demoapp to figure out the maximum frame rate without frame drops (probably about 30 unless you changed PixelFormat).
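If you prefer to check this programmatically, a rough sketch (assumption: the property map exposes find_float and a maximum attribute; verify against the ic4 Python documentation for your installed version):

# Query the maximum frame rate the current format allows (assumed API)
max_fps = grabber.device_property_map.find_float(ic4.PropId.ACQUISITION_FRAME_RATE).maximum
print(f"Maximum frame rate: {max_fps:.1f} fps")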

@Skullface9512

Hi Tim, I tried to show the captured video through OpenCV. The method is:

import imagingcontrol4 as ic4
import cv2
def example_imagebuffer_numpy_opencv_snap():
    device_list = ic4.DeviceEnum.devices()
    for i, dev in enumerate(device_list):
        print(f"[{i}] {dev.model_name} ({dev.serial}) [{dev.interface.display_name}]")
    print(f"Select device [0..{len(device_list) - 1}]: ", end="")
    selected_index = int(input())
    dev_info = device_list[selected_index]

    grabber = ic4.Grabber()
    grabber.device_open(dev_info)

    cv2.namedWindow("display")

    sink = ic4.SnapSink()
    # sink = ic4.QueueSink(accepted_pixel_formats=ic4.native.IC4_PIXEL_FORMAT.IC4_PIXEL_FORMAT_BayerGB8,max_output_buffers=1,listener=QueueSinkListener)
    grabber.stream_setup(sink)

    while True:

        buffer = sink.snap_single(1000)
        np = buffer.numpy_wrap()
        np_1=cv2.resize(np,(np.shape[1]//3,np.shape[0]//3), interpolation=cv2.INTER_LINEAR)
        cv2.imshow("display", np_1)
        cv2.waitKey(1)

    grabber.stream_stop()


if __name__ == "__main__":
    ic4.Library.init(api_log_level=ic4.LogLevel.INFO, log_targets=ic4.LogTarget.STDERR)

    try:
        example_imagebuffer_numpy_opencv_snap()
    finally:
        ic4.Library.exit()

But the video is grayscale. Can I set it to RGB video, and how can I increase its frame rate? The device I am using is a DFM 37UX178-ML.

@TIS-Tim
Contributor

TIS-Tim commented Dec 20, 2024

If you create your sink with an accepted_pixel_formats parameter, you can force an automatic buffer format conversion to RGB:

sink = ic4.SnapSink(accepted_pixel_formats=[ic4.PixelFormat.BGR8])

buffer will then contain RGB data (3 bytes per pixel); you may or may not have to adjust your OpenCV code to match that.
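A minimal sketch of how the earlier loop changes (assuming grabber is already open; BGR8 matches the channel order OpenCV expects, so the frame can be shown directly):

sink = ic4.SnapSink(accepted_pixel_formats=[ic4.PixelFormat.BGR8])
grabber.stream_setup(sink)

buffer = sink.snap_single(1000)
img = buffer.numpy_wrap()   # height x width x 3 array in BGR order
cv2.imshow("display", img)
cv2.waitKey(1)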

@Skullface9512

Hi Tim, today I tried to capture video but came across a problem:

Traceback (most recent call last):
  File "E:\Project\ic4-examples-master\python\imagebuffer-numpy-opencv-snap.py", line 119, in <module>
    example_imagebuffer_numpy_opencv_snap()
  File "E:\Project\ic4-examples-master\python\imagebuffer-numpy-opencv-snap.py", line 72, in example_imagebuffer_numpy_opencv_snap
    buffer = sink.snap_single(1000)
  File "E:\Anaconda\envs\py38\lib\site-packages\imagingcontrol4\snapsink.py", line 164, in snap_single
    IC4Exception.raise_exception_from_last_error()
  File "E:\Anaconda\envs\py38\lib\site-packages\imagingcontrol4\ic4exception.py", line 46, in raise_exception_from_last_error
    raise IC4Exception(err, message.value.decode("utf-8"))
imagingcontrol4.ic4exception.IC4Exception: (<ErrorCode.Timeout: 27>, 'ic4_snapsink_snap_single: Timeout elapsed')

Then I changed "buffer = sink.snap_single(1000)" to "buffer = sink.snap_single(5000)", but the result has not changed. Can you give me some recommendations?
