If you want your installation to perform better, please tune the server first. Once that is done and the server is optimized, we can see which scenarios could still be improved significantly by optimizing the client-server communication beyond what is possible with WebDAV today. The device does, however, have a large amount of class 10 SD card storage as swap, so multi-threaded uploading might provide a kind of non-blocking behavior and improve performance. I would say even a second upload thread would make a big difference.
When you say that the server component needs fixing, you mean that the issue is in the core, not in mirall, right? As we're still experiencing problems dealing with a large number of files, do you think it should be reopened? Not sure if that describes it best. Feel free to open a new issue in core; make sure to link it here. I also strongly believe now that the server component has to be revised for a faster, possibly non-blocking, solution.
I'd like to change too, but a new install is currently not an option for me. The client creates a lot of HDD noise, as if it were searching for something on the disk; then suddenly there is a click and the file is uploaded. The upload itself is instant. There is nothing wrong with the HDD, and besides, it is really fast.
That's great. I think the priority for fixing this server-side should be raised; otherwise, parallel upload will kill multi-user installations by suddenly increasing the request rate.
Good point. Overall it's not that easy to solve the session handling in the current code bases. The general approach is to open the session as late as possible and close it as early as possible.
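The "open the session as late as possible, close it as early as possible" approach can be sketched as a small wrapper. This is a minimal illustration, not ownCloud's actual code; the `LazySession` class and its callbacks are hypothetical.

```python
class LazySession:
    """Hypothetical sketch of lazy session handling: the session is
    opened only when first needed and released as soon as possible,
    so it is not held open across slow work."""

    def __init__(self, opener, closer):
        self._opener = opener
        self._closer = closer
        self._session = None

    def get(self):
        # Open as late as possible: only when a request needs it.
        if self._session is None:
            self._session = self._opener()
        return self._session

    def release(self):
        # Close as early as possible: right after the critical section,
        # rather than at the end of the whole request.
        if self._session is not None:
            self._closer(self._session)
            self._session = None


events = []
s = LazySession(lambda: events.append("open") or "sess",
                lambda sess: events.append("close"))
s.get()       # session opened only here, not at construction time
s.release()   # and released immediately after use
print(events)  # → ['open', 'close']
```

The point of the pattern is that two concurrent requests contend for the session lock only during the short open/close window, not for the full request duration.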
We tried at least one approach. This is very high on the todo list. Let's look into that after the 6.
I'm not sure if it is the same issue, but deleting many files or folders is also really slow.
I also have this problem; the performance is terrible. This is with OC 6.
Same problem, OC 6. As far as I can see in all the comments, it's not the case either. I have 1. For small files I'm seeing maybe 1 every 3 seconds, which seems fast until you are looking at 50, or so.
It should normally upload 3 files in parallel. The other problem is that the server might have a performance issue; check the CPU usage on the server. Note: if you limit the upload or download speed in the settings, the client won't use parallel uploads. Sorry, I meant to provide that info: the client is set to "No limit" for both download and upload.
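The effect of uploading 3 files in parallel can be sketched with a thread pool. This is not the client's actual code; `upload` here is a stand-in that simulates per-file network latency with a sleep.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def upload(name):
    """Simulated upload: the sleep stands in for network round-trip
    time per file (the dominant cost for many small files)."""
    time.sleep(0.2)
    return name

files = [f"file{i}.txt" for i in range(6)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=3) as pool:  # 3 uploads in flight
    done = list(pool.map(upload, files))
elapsed = time.monotonic() - start

# 6 files across 3 workers take about 2 rounds of 0.2 s,
# versus roughly 1.2 s for a strictly serial loop.
print(done == files, elapsed < 1.0)
```

This also illustrates the earlier worry: parallelism helps a single client but multiplies the concurrent request rate the server must absorb.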
I'm using "tcptrack -i eth0" on the server side to watch the TCP connections and their speeds. Mostly I only see a single connection on the port from the client, and occasionally a second connection, presumably the client app checking in. I wonder if I have some legacy config params set on the Windows client? I doubt it, since I installed it for the first time yesterday and didn't make any changes to the Network tab.
There might still be a problem with the server being slow, especially if using a slow machine or SQLite, but that's nothing we can fix in the client. Hi all, IMHO the speed issue lies in the way the PHP code handles nested iterations over file records!
It just needs a proper rethink in which one good SQL query delivers a single recordset over which the code iterates only once, branching off only to do perhaps one insert or update statement, which may create or update multiple records.
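The difference between nested per-record queries and a single recordset can be sketched with an in-memory database. The schema and data below are invented for illustration (ownCloud's real filecache schema differs); the point is the N+1 anti-pattern versus one JOIN.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE folders (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE files (id INTEGER PRIMARY KEY, folder_id INTEGER, name TEXT);
INSERT INTO folders VALUES (1, 'docs'), (2, 'pics');
INSERT INTO files VALUES (1, 1, 'a.txt'), (2, 1, 'b.txt'), (3, 2, 'c.jpg');
""")

# Anti-pattern: one extra query per folder (N+1 queries in total).
slow = []
for (fid, fname) in con.execute("SELECT id, name FROM folders"):
    for (name,) in con.execute(
            "SELECT name FROM files WHERE folder_id = ?", (fid,)):
        slow.append((fname, name))

# One good query: a single JOIN delivers one recordset,
# over which the code iterates exactly once.
fast = list(con.execute("""
    SELECT folders.name, files.name
    FROM folders JOIN files ON files.folder_id = folders.id
    ORDER BY folders.id, files.id
"""))

print(slow == fast)  # → True: same result, one query instead of N+1
```

With thousands of file records, the saving is not the iteration itself but the per-query overhead (parsing, round trips, locking) multiplied by N.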
This discussion has been locked. UTM ownCloud very slow download.
Any luck with either of those guesses? Cheers - Bob. MediaSoft, Inc.
I did some more tests: Linksys router connected to the modem, W7 laptop, tests with iperf3. Are the results good? 3. Is this a problem in combination with ESXi and Linux?
I'm experiencing similar slowness.
Can you guys try downloading with curl on the command line? The command would look like this: So far I still suspect that Dolphin or an underlying lib is capping the speed for some reason. Another thing to test: set up an Apache DAV server (not ownCloud) and test downloading with Dolphin against it to see if the speed is also capped. Maybe it's using parallel downloads. I would be interested to see the commands sent by the davfs libs.
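If curl isn't handy, the same "is the client capping the speed?" check can be done with a small Python sketch that measures raw HTTP(S) download throughput. The URL in the comment is a placeholder, not a real endpoint; adjust it to your own server.

```python
import time
import urllib.request

def measure_download(url, chunk=64 * 1024):
    """Download url in chunks and return (total_bytes, seconds),
    so the raw throughput can be compared with what Dolphin/davfs
    reports for the same file."""
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            data = resp.read(chunk)
            if not data:
                break
            total += len(data)
    return total, time.monotonic() - start

# Example (hypothetical host and path, adjust to your setup):
# n, secs = measure_download("https://my-oc.example.com/remote.php/webdav/big.bin")
# print(f"{n / secs / 1e6:.1f} MB/s")
```

If this number matches curl but is much higher than Dolphin's, the cap is in the DAV client stack rather than in the server or the network.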
The normal download is just a GET; the client does no parallelism for downloading a single file. First, let's see what curl says for both jsalatiel and elvetemedve, who I suspect have different issues (env vs. lib). PVince81: Basically the speed is the same with curl, so the bottleneck must be on the server side. I tracked down the problem and I have to apologise, because OC works perfectly. It was a networking issue. I'm going to explain what happened; it might be useful for others.
I run an OC instance on my "home server", meaning there is a machine at home connected to the Internet. See the topology below. When I use the internal domain name (my-oc.), the download speed is at its maximum; the wifi speed is the bottleneck.
But when I use the public one (my-oc.), the packet first goes out through the gateway. After that the packet arrives at the WAN interface of the wireless router, which is configured to apply some traffic shaping on that interface. And finally the router has a forward rule to send the packet to the OC server. The conclusion is that the packet should not reach the gateway at all (and be treated as external traffic), but should be sent to the internal OC server directly, just like when I use the private domain name, which points to the private IP of the OC server.