This video tutorial demonstrates how easily and quickly you can solve a detection application
with our new Visionary-T DT.
The following things are used for this tutorial:
A Visionary-T DT 3D snapshot sensor, a power cable, an Ethernet cable, a signal light
connected to the digital outputs of the Visionary-T DT, and a lattice box,
as the so-called "lattice box application" is ideal for the purpose of this tutorial.
Now, as you can see on the screen,
I have already installed and embedded Visionary-T DT into the SOPAS ET software.
If you don't know how to do this, please refer to our USB-content
delivered with the sensor, specifically to the document called: "SOPAS installation and embedding Visionary-T".
To get to the homepage of the sensor I double-click on the project here on the left hand side.
A new window opens and I am now on the homepage of Visionary-T DT.
In order to be able to configure the sensor, we first have to log in as "Service".
The Service password is provided in the GUI-presentation
that is on the USB or amongst the USB-content.
Now, before we go on with the actual configuration of the sensor, we first have a look at the
setup and define which detection tasks shall be solved.
Now here is the actual setup and we can see that Visionary-T DT is mounted on the ceiling,
facing down on the lattice box.
We now want to configure Visionary-T DT in such a way, that we
on the one hand
are able to detect whether an object is inside the lattice box or not
and on the other hand
whether the door on the side is open or whether it's closed.
Now that we are aware of the setup and the actual detection tasks,
in the next step we create a specific configuration for these tasks,
as we want to permanently save this configuration on the sensor itself.
We do so by clicking on this dropdown menu and selecting "New Job".
To adjust the specific detection settings of the sensor,
we go to the visual settings page by clicking "visual settings".
Now what we see here on the left hand side is the 2D-Intensity image
showing the lattice box
and here on the right hand side, we have the 3D-Pointcloud that is captured by the sensor.
Now, in order to go through all the settings, we simply follow these four steps here on
the right hand side, starting with the mounting settings.
On the mounting settings page we find three presets
and in our case, the preset on the left is exactly the configuration we have with our lattice box,
meaning the camera mounted on the ceiling facing down onto the lattice box.
Now, by clicking on this preset, we find that the mounting setting is already
pretty well adjusted to the real setup, but it still needs some fine tuning.
This is done by first selecting "Sensor position".
And to adjust the sensor position we have two possibilities.
We can go here into the viewer and actually drag and drop the camera itself.
In this case, I can see that the points originating from the floor here in my lab
are not aligned with the virtual floor of the camera depicted by this checkerboard.
So I need to align these pixels with our virtual floor by moving the camera down.
Now let's have a quick look from the side to see how well this matches, and this is already
a pretty good match.
What I can see here now, is that the camera is slightly tilted with respect to the floor
and I have to adjust the sensor orientation.
This is done by clicking on "Sensor orientation" and again I have the option to either use
the buttons here or to directly drag and drop by using the handles here in the 3D-viewer.
In this case, the green handle is the correct one and I will adjust the orientation of the sensor.
Again,
we check whether the points coming from the floor match the checkerboard floor
and we see that it still requires a slight tilt
and that we need to move down the sensor a bit further.
This time, I use the input field here and I go to a height of 1.9 meters,
which is actually too high, so let's go down to 1.8 meters
and
this is a pretty good match.
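Conceptually, the fine-tuning we just did (setting the camera height to 1.8 meters and correcting its tilt so that the measured floor points land on the virtual checkerboard floor) amounts to a rigid transform from the camera frame into the world frame. Here is a minimal Python sketch of that idea, assuming a generic downward-facing camera model rather than SOPAS's actual internal conventions:

```python
import numpy as np

def to_world(points_cam, height_m=1.8, tilt_deg=0.0):
    """Transform camera-frame points into the world frame for a
    ceiling-mounted, downward-facing camera."""
    t = np.radians(tilt_deg)
    # Rotation about the x-axis to compensate a small mounting tilt.
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(t), -np.sin(t)],
                    [0.0, np.sin(t),  np.cos(t)]])
    # The camera looks straight down, so camera depth (+z) maps to world -z.
    flip = np.diag([1.0, 1.0, -1.0])
    world = points_cam @ (rot @ flip).T
    world[:, 2] += height_m  # lift by the mounting height
    return world

# A point measured 1.8 m in front of the camera lies on the floor (z = 0).
pt = np.array([[0.0, 0.0, 1.8]])
print(to_world(pt))
```

With the correct height and tilt, the floor points come out at z = 0, which is exactly what aligning the point cloud with the checkerboard floor achieves in the viewer.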
So for now, we are done with the mounting settings and can go on with the next step,
meaning the adjustment of the Time-of-Flight settings.
For the Time-of-Flight settings, we keep the confidence algorithm with the default value "Continuous acquisition".
As there is only one Visionary-T here, we can stay with code one,
and the acquisition mode can also stay at standard.
In order to also see black objects, we keep an integration time of 4 milliseconds,
which is the default value.
As we don't have any objects in the scene that are further away than 7.2 meters
from the camera, we can use the short-range non-ambiguity mode.
We leave the Time-of-Flight sensitivity with the default value.
Also the average mode is off.
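The short-range mode mentioned above follows from how continuous-wave Time-of-Flight measurement works: distance is derived from a phase shift, so readings wrap around beyond an unambiguous range of c / (2 · f_mod). A quick sketch of this relation (the 20.8 MHz modulation frequency is an assumed illustration value, not a datasheet figure):

```python
# Speed of light in m/s.
C = 299_792_458.0

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance measurable before the phase wraps around."""
    return C / (2.0 * f_mod_hz)

# A modulation frequency of about 20.8 MHz gives roughly the 7.2 m
# non-ambiguity range of the short-range mode. (Assumed value for
# illustration only.)
print(round(unambiguous_range(20.8e6), 2))
```

This is why the mode is safe to use here: as long as no object is further away than that range, no measurement can be folded back to a wrong, shorter distance.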
After we have now adjusted the Time-of-Flight settings, we go on by adjusting the data filters.
Now as you can see here on the data filters page, there are various filters that can be applied.
In this case here or for this application, I actually only want to apply one filter.
If you are interested in the individual filters, please refer to
the GUI-presentation provided with the USB-flash drive.
Now, in our case, I only want to use the intensity-based filter, in order to get rid of those
pixels that do not originate from the lattice box itself, but from the floor around the lattice box.
If I hover over these pixels originating from the floor, you can see
in the lower right of this 3D-viewer that the intensity of these pixels is quite low.
I will use this fact to actually filter out those pixels.
Now let's see what happens if I increase the lower threshold of this filter.
What we see is that at a value of around 50 decibels, essentially only the lattice box remains
and the pixels coming from the floor are filtered out.
This is pretty much what I want to achieve.
It's not necessary to apply any more filters.
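The intensity-based filtering above boils down to a simple threshold applied per point. A minimal sketch with made-up sample data (the real sensor applies the filter on-device; only the 50 dB threshold comes from the value found interactively here):

```python
import numpy as np

# Hypothetical sample data: three 3D points and their intensities in dB.
points = np.array([[0.9, 0.2, 0.0],    # floor around the box, low intensity
                   [0.3, 0.1, 0.6],    # lattice box
                   [0.2, 0.4, 0.5]])   # lattice box
intensity_db = np.array([35.0, 62.0, 58.0])

LOWER_THRESHOLD_DB = 50.0
keep = intensity_db >= LOWER_THRESHOLD_DB  # boolean mask per point
filtered = points[keep]
print(len(filtered))  # only the two lattice-box points remain
```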
So let's return to the main settings page and let's enter the detection configuration itself.
The detection configuration itself is again divided into four steps.
We recommend following these four steps as they are numbered from one to four.
So let's start with the detection volume.
Now, in this case, what we need to do is adjust the detection volume
such that it covers the actual volume within which we want to detect anything.
Now, if I use the viewer and zoom out, I can see that right now
this detection volume doesn't match my lattice box at all.
So I need to move this volume of interest
such that it matches my actual desired volume of interest.
I can do so by again using the viewer to drag and drop the volume of interest,
or if I know the specific physical lower and upper limits of my volume of interest,
I can directly put in the numbers here, but for now, I will start with drag and drop.
So I'm moving the volume of interest onto the lattice box.
Let's readjust our view.
Let's say we view it from the top.
Let's center the volume of interest on the lattice box.
We have now centered this in X and Y.
Let's have a look at the volume of interest from the side.
We can see that no adjustments have to be made regarding the upper limit in Z-direction,
but the lower limit should be changed.
Now let's see if we cover the whole lattice box.
This is the case right now,
so we now go on to partition the volume of interest into individual cuboids.
I know that for this application a partitioning of 12 by 10, so 120 cuboids overall, works really well.
Now, let's have a look at this from the bottom and we can see that the partitioning
is done such that the edges of the box fall into individual rows of cuboids,
which will later help us to actually detect whether the door of this lattice box
is opened or closed and to detect whether there is an object inside the lattice box.
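The 12-by-10 partitioning can be sketched as a simple grid subdivision of the volume of interest. The `Cuboid` class and the example dimensions below are illustrative, not the sensor's actual data model; only the 12-by-10 grid matches the configuration in the video:

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

def partition(x0, x1, y0, y1, z0, z1, nx, ny):
    """Split the volume of interest into an nx-by-ny grid of cuboids."""
    dx = (x1 - x0) / nx
    dy = (y1 - y0) / ny
    return [Cuboid(x0 + i * dx, x0 + (i + 1) * dx,
                   y0 + j * dy, y0 + (j + 1) * dy,
                   z0, z1)
            for i in range(nx) for j in range(ny)]

cuboids = partition(-0.6, 0.6, -0.5, 0.5, 0.0, 1.0, nx=12, ny=10)
print(len(cuboids))  # 120 cuboids overall
```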
Let's go to the second step that is adjusting the cuboid heights.
In this step, we can adjust the height of every individual cuboid itself,
meaning the upper and lower limit of each of the cuboids.
Now for this application, this is not necessary and I will simply globally adjust the height of all cuboids,
which I do by pressing "Select all" and then just lowering the upper limit of all the cuboids
such that they roughly match the upper limit of the lattice box.
I will do the same thing for the lower limit by moving the lower limit a bit up,
such that it roughly matches the bottom of this lattice box.
In the next step, we will now go to the cuboid groups.
In this step the individual cuboids are assigned to groups.
In our case,
as we want to detect whether the box is open or not,
meaning the door of this box is closed or open,
we can define one group for this specific task and I do so by clicking on the individual
cuboids that shall belong to this group.
I can either do so by clicking on the individual cuboids or
I have the option again to just use my mouse to mark the individual cuboids.
Now let's have a look at this from the side.
You can see, if I hover over the individual cuboids,
that the volume covered by these cuboids is also visualized by a line
that connects the upper and lower limits of each cuboid.
I have selected these cuboids, so that the volume covered by this group of cuboids
covers the door, which is here on the side of the lattice box.
I now assign these cuboids, which I have marked, by clicking the plus button for group one,
so these cuboids now belong to group one.
I will name this group "door".
We can see that eight cuboids belong to this group. If we want, we can also change
the color of this group, but I will leave it red.
Now the second detection task is to see whether there is an object or several objects
inside the lattice box.
I will use the top view to do so,
zoom in a bit
and use the mouse to mark all the cuboids
that cover the volume inside the lattice box.
Now let's see if this is properly done.
What I see right now is that I can also add these cuboids here.
Now, that I have marked these cuboids, I will add them to group two by clicking the plus button.
This group is now called "box_content"
and we can see that I added 60 cuboids to this group.
By clicking again on "door", I can see those cuboids that belong to the door group and
by clicking on "box_content" I can see the cuboids that belong to the box_content group.
Now let's go to the next step, the digital outputs.
In this menu, I assign the groups to different digital outputs.
In my case, I assign the door group to output three and the box_content group to output four.
This is simply because I have connected my signal lamp to outputs three and four.
This means whenever there is a detection in the door group I will get a signal on output three
and if there is a detection in the box itself I will get a signal at digital output four.
I will leave the Off-Delay at zero milliseconds, as we do not need any Off-Delay for this application.
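The group-to-output assignment above is just a mapping from active detection groups to digital output lines. A sketch of that idea (the group names and output numbers come from this setup; the function itself is illustrative, not the sensor's firmware logic):

```python
# Mapping from detection group to digital output, as configured here.
OUTPUT_MAP = {"door": 3, "box_content": 4}

def active_outputs(active_groups):
    """Return the set of digital outputs that should switch high."""
    return {OUTPUT_MAP[g] for g in active_groups if g in OUTPUT_MAP}

print(sorted(active_outputs({"door"})))                 # output three only
print(sorted(active_outputs({"door", "box_content"})))  # both outputs
```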
Now let's go back to the main detection settings.
As you can see down here, a few more things can be adjusted.
First, we can adjust the detection sensitivity, which defines how strong
the deviation from the taught scene has to be in order to trigger a detection.
I will leave this at "High" as we also want to see smaller objects within the lattice box.
I also leave the Outlier suppression at the default value of three, but I will increase
the Multiple sampling from one frame to three frames, as this is a static application.
A multiple sampling of three means that it requires a detection
in three consecutive frames in order to trigger a detection.
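The multiple-sampling behaviour described above works like a per-frame debounce: a detection is only reported once it has been seen in N consecutive frames, which suppresses single-frame noise in a static scene. A minimal sketch, not the sensor's actual implementation:

```python
def debounce(raw_detections, n=3):
    """Report a detection only after n consecutive raw detections."""
    streak = 0
    reported = []
    for raw in raw_detections:
        streak = streak + 1 if raw else 0
        reported.append(streak >= n)
    return reported

# A short glitch is suppressed; a stable detection is reported
# from the third consecutive frame onward.
print(debounce([0, 1, 1, 0, 1, 1, 1, 1]))
```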
Before we are done with the whole configuration of our 3D-Sensor, there are two things we need to do.
One thing that is crucial is to teach the scene and the volume of interest to the sensor
and I will do so by clicking "teach" here.
The scene has now been taught and in a final step
I will save this configuration permanently on the sensor by clicking this button here.
Now I can still use this configuration after powering the sensor off and on again.
Now let's have a closer look and see how well our configuration performs in solving the application.
It's best if we do so by watching the signal lamp when we open and close the door
or when we put objects inside the lattice box.
Of course, you can also have a look at the 3D-View here, because detections in the individual
cuboids will be indicated by a changing color.
So non-active cuboids are shown in blue.
Once there is an active detection, their color changes to violet.
Now let's see what happens if we open and close the door or move objects inside the box.