Sending real-time images captured by a Unity camera
Server
private void SendImageByte()
{
    image_bytes = cm.Capture();
    print(image_bytes.Length);
    if (connectedTcpClient == null)
    {
        return;
    }
    try
    {
        // Get a stream object for writing.
        NetworkStream stream = connectedTcpClient.GetStream();
        if (stream.CanWrite)
        {
            // string serverMessage = "This is a message from your server.";
            // Convert string message to byte array.
            byte[] serverMessageAsByteArray = Encoding.ASCII.GetBytes(image_bytes.ToString());
            // Write byte array to socketConnection stream.
            stream.Write(serverMessageAsByteArray, 0, serverMessageAsByteArray.Length);
            Debug.Log("Server sent his message - should be received by client");
        }
    }
    catch (SocketException socketException)
    {
        Debug.Log("Socket exception: " + socketException);
    }
}
Client
import socket

host = "127.0.0.1"
port = 1755
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))

def receive_image():
    data = sock.recv(999999).decode('utf-8')
    print(len(data))

while True:
    receive_image()
And here is the script that captures the image from the Unity camera:
public byte[] Capture()
{
    if (renderTexture == null)
    {
        // creates an off-screen render texture that can be rendered into
        rect = new Rect(0, 0, captureWidth, captureHeight);
        renderTexture = new RenderTexture(captureWidth, captureHeight, 24);
        screenShot = new Texture2D(captureWidth, captureHeight, TextureFormat.RGB24, false);
    }
    // _camera = GetComponent<Camera>();
    _camera.targetTexture = renderTexture;
    _camera.Render();
    // reset active camera texture and render texture
    _camera.targetTexture = null;
    RenderTexture.active = null;
    // ReadPixels reads from the currently active render texture, so make our
    // off-screen render texture active and then read the pixels
    RenderTexture.active = renderTexture;
    screenShot.ReadPixels(rect, 0, 0);
    screenShot.Apply();
    byte[] imageBytes = screenShot.EncodeToPNG();
    //Object.Destroy(screenShot);
    //File.WriteAllBytes(Application.dataPath + "/../"+ imagePath + "/img{counter}.png", bytes);
    //counter = counter + 1;
    return imageBytes;
}
I'm trying to send real-time images from Unity3D (C#) to Python over a socket connection, process them in Python, and return values back to Unity. The problem is that the byte counts don't even match: the client receives a different number of bytes than the server sends. I send roughly 400K bytes, but I receive only 13.
C# is the server, Python is the client.
Maybe I'm going about this the wrong way, but the main goal of building the simulator is autonomous driving.
Are you sure that image_bytes.ToString() returns what you expect, and not rather "System.Byte[]" => 13 characters => 13 bytes?
In general, why convert something that already is a byte[] into a string, only to convert it back into a byte[] for sending? I'm pretty sure you don't want to transmit binary image data as UTF-8 anyway ... one option might be a Base64 string, but even that would still be quite inefficient.
Just send the raw bytes directly, e.g.
stream.Write(image_bytes, 0, image_bytes.Length);
and then receive until you have received that length.
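Without a length prefix, though, the receiver has no way of knowing that length up front, which is exactly what motivates the prefix described next; for a one-shot transfer you could instead read until the server closes the connection. A minimal sketch of that variant, under my assumption of one image per connection (not part of the original code):

def receive_raw_image(sock):
    # Collect raw binary chunks; decoding them as UTF-8 would corrupt PNG data.
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            # The server closed the connection, so the image is complete.
            break
        chunks.append(chunk)
    return b"".join(chunks)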
A typical solution is to prefix the message with its length, and to have the receiver actually wait until it has received exactly that many bytes, e.g.
var lengthBytes = BitConverter.GetBytes(image_bytes.Length);
stream.Write(lengthBytes, 0, lengthBytes.Length);
stream.Write(image_bytes, 0, image_bytes.Length);
Now you know on the receiving side that you first have to receive exactly 4 bytes (== one int), which tell you the exact number of bytes to receive for the actual payload.
Now, I'm no Python expert, but after googling a bit I think it should be something like
import struct

def receive_image():
    length_bytes = sock.recv(4)
    # BitConverter.GetBytes produces little-endian output on typical
    # x86/ARM platforms, so unpack the length as little-endian ("<i").
    length = struct.unpack("<i", length_bytes)[0]
    data = sock.recv(length)
Note: After reading John Gordon's comments on the question, I guess this still doesn't entirely solve the issue of waiting until the corresponding buffers are actually filled - as said, I'm no Python expert - but I hope it gives you an idea of where to go ;)
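To actually close that gap, the usual fix is a small helper that loops on recv until the expected number of bytes has arrived, since a single recv call may legally return fewer bytes than requested. A minimal sketch building on the snippet above; the helper name recv_exact is mine, not part of the original answer:

import struct

def recv_exact(sock, length):
    # recv may return fewer bytes than requested, so loop until
    # exactly `length` bytes have been collected.
    chunks = []
    remaining = length
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise ConnectionError("socket closed before the message completed")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def receive_image():
    # Read the 4-byte little-endian length prefix, then exactly that many payload bytes.
    length = struct.unpack("<i", recv_exact(sock, 4))[0]
    return recv_exact(sock, length)

The returned bytes are the PNG produced by EncodeToPNG() on the Unity side, so they can be written straight to disk or handed to any image library that understands PNG.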